On Thu, Nov 09, 2000 at 08:27:29PM +0000, Matt Sergeant wrote:
> On Thu, 9 Nov 2000, Ask Bjoern Hansen wrote:
> 
> > On Wed, 11 Oct 2000, Matt Sergeant wrote:
> > 
> > > > > Most modern DBMS software should be able to handle 50 queries per second
> > > > > on decent hardware, provided the conditions are right. You're not going to
> > > > > get anything better with flat files.
> > > > 
> > > > Hmm... I guess it all depends on what your queries look like, but you can
> > > > get better results from flat files if you put them in a precise layout.
> > > > Granted, if you are talking about having a million lines in a single
> > > > flat file, then I definitely agree with you.
> > > 
> > > I think the limiting factors are quite a bit sooner than a million
> > > lines. What I'm trying to get across is that developers should be
> > > focussing on letting the DBMS do what a DBMS does best - queries. The DB
> > > is far better placed (and generally better developed) to do the
> > > optimisation than trying to come up with a flat file strategy that works
> > > with your system.
> > 
> > If you're always looking stuff up on simple ID numbers and
> > "stuff" is a very simple data structure, then I doubt any DBMS can
> > beat 
> > 
> >  open D, "/data/1/12/123456" or ...
> > 
> > from a fast local filesystem.
> 
> Note that Theo Schlossnagel was saying over lunch at ApacheCon that if
> your filename has more than 8 characters on Linux (ext2fs), it skips from a
> hashed algorithm to a linear algorithm (or something to that effect). So
> be careful there. I don't have more details or a URL for any information
> on this though.

Similarly on Solaris (and perhaps most SysV derivatives) path component
names longer than 16 chars (configurable) don't go into the inode
lookup cache and so require a filesystem directory lookup.

[As far as I recall. No URL either :-]
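
In case a concrete sketch helps: the ID-sharding lookup Ask describes
might look like this in Perl (a sketch only; the /data base directory and
the two-level digit split are illustrative assumptions, not anyone's
actual layout):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map a numeric ID to a sharded path like /data/1/12/123456.
# Each path component stays short, which keeps name lookups on
# the fast path given the 8-char (ext2) and 16-char (Solaris)
# thresholds mentioned above.
sub id_to_path {
    my ($base, $id) = @_;
    my $d1 = substr($id, 0, 1);   # first digit of the ID
    my $d2 = substr($id, 0, 2);   # first two digits of the ID
    return "$base/$d1/$d2/$id";
}

# Usage:
# my $path = id_to_path("/data", 123456);
# open my $fh, "<", $path or die "can't open $path: $!";
```

The split also keeps any one directory from accumulating so many
entries that a linear directory scan becomes the bottleneck.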

Tim.