> Is there any particular reason why ONE table wouldn't do the 
> trick just fine?
> 
> Title (varchar 255)
> Recipe (long text)
> 
> or break it out to maybe refine ingredient searches and get a 
> little fancier:
> 
> ShortDescr (long text)
> Ingredients (long text)
> Method (long text)
> KeyWords (varchar 255)
> 
> If all the guy wants to do is type in his recipes and find 
> them again, or search for all recipes which contain chili 
> peppers, wouldn't this be simpler for him to maintain and 
> deliver the same functionality?

Denormalized data is almost always a very bad thing, and would be in this
case, as well. There are all kinds of problems with storing a list of items
in a field; what will the user do when he wants to find out what ingredients
he needs to buy that week, or wants to find all the recipes that contain
paprika? Sure, you could build workarounds to allow both of those, but
that's all they'd be - workarounds. They'd cause their own problems, and
would require more code to provide the same functionality.
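
For what it's worth, here's a rough sketch of the kind of normalized design I
have in mind - three tables instead of one, with the ingredients broken out
into their own table and a linking table between them. The table and column
names are just placeholders, of course:

    CREATE TABLE Recipes (
        RecipeID    INT PRIMARY KEY,
        Title       VARCHAR(255),
        ShortDescr  TEXT,
        Method      TEXT
    );

    CREATE TABLE Ingredients (
        IngredientID    INT PRIMARY KEY,
        IngredientName  VARCHAR(100)
    );

    -- Linking table: one row per ingredient per recipe
    CREATE TABLE RecipeIngredients (
        RecipeID      INT REFERENCES Recipes(RecipeID),
        IngredientID  INT REFERENCES Ingredients(IngredientID),
        Quantity      VARCHAR(50),  -- e.g. '2 tsp', kept as free text
        PRIMARY KEY (RecipeID, IngredientID)
    );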

Here are just some of the things that go wrong with denormalized data:
- data will be duplicated, so the database will be unnecessarily large
- duplicated data may be incorrectly entered
- searches will be less efficient - in your example, you'd have to search a
long text field to match ingredients!
- the database won't scale well - what if your chef has more than a couple
of hundred recipes?
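
With a structure like the one sketched above, the two examples I mentioned
become simple joins instead of text searches - something like this, again
using the placeholder names from my sketch:

    -- All recipes that contain paprika
    SELECT r.Title
    FROM Recipes r
    JOIN RecipeIngredients ri ON ri.RecipeID = r.RecipeID
    JOIN Ingredients i ON i.IngredientID = ri.IngredientID
    WHERE i.IngredientName = 'paprika';

    -- A shopping list for a chosen set of recipes (1, 2 and 3 here)
    SELECT DISTINCT i.IngredientName
    FROM Ingredients i
    JOIN RecipeIngredients ri ON ri.IngredientID = i.IngredientID
    WHERE ri.RecipeID IN (1, 2, 3)
    ORDER BY i.IngredientName;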

> I'm just taking this from the user's side rather than the 
> programmer's.  
> 
> Would you be making a highly capable system whose elegance is 
> lost on a user when he has to pick one ingredient from a 
> drop-down list of 200 (and populate that list himself when he 
> needs a new ingredient not on the list)? 
> 
> I'm just suggesting that the user should have options put 
> before him: expandability/capability vs. 
> simplicity/usability, and let him decide.  Then you build 
> based on informed consent.

These are UI issues, not database design issues. All of them can be solved
relatively easily without compromising the normalization of the data. In
addition, normalized data lets you provide more functionality with fewer bugs
and glitches; in the long run, things will simply work better.
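
For example, the drop-down list you mention is just a query against the
ingredient table, and adding a new ingredient is a single insert; neither
requires the user to know or care how the data is stored. Using the
placeholder names from my earlier sketch:

    -- Populate the ingredient drop-down
    SELECT IngredientID, IngredientName
    FROM Ingredients
    ORDER BY IngredientName;

    -- Let the user add a new ingredient on the fly
    -- (in practice you'd likely use an auto-numbering key instead
    -- of supplying the ID yourself)
    INSERT INTO Ingredients (IngredientID, IngredientName)
    VALUES (201, 'chipotle peppers');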

I realize that my argument may not be especially convincing to you; until
you've run into these sorts of problems yourself, the path of denormalization
can look very attractive. Unfortunately, it's easy going down that path, but
hard to come back. I've seen this sort of thing a lot with projects that
start small, then get bigger.

In any case, the programmer should be on the user's side - they're not
supposed to be antagonists. The best solution for the user is one that's
easy to use, but also robust and reliable. It's up to the programmer to
deliver that solution.

Dave Watts, CTO, Fig Leaf Software
http://www.figleaf.com/
voice: (202) 797-5496
fax: (202) 797-5444
