The question is more about theoretical optimisation than improving the speed of that specific database (which was used only for benchmarks). Also note that the 700-record difference was measured at around 5,000 queries per second, so it is definitely not a margin of error; this is a problem with all tables that have a large number of fields, regardless of the database or software.

In theory, let's say I have hundreds of databases running on the same server, hence I want to optimise the way each database is created and the way queries are performed (keep in mind that the 14% increase in processing does eventually add up).

Consider the following steps taken to optimise an old database (a rough schema sketch of the three stages follows the list):
1. Start with a database that contains 1:1 relationships, some of which require joining.

The problem here is that a join is one of the slowest operations, hence these 1:1 relationships must be combined.

2. Merge all 1:1 relationships into one table.

Result: One large table with n fields.
Problem: Fields at the tail end of the table are slow to query.

3. Break up the table according to how it is used (as long as the new tables do not need to be joined).

Result: Small tables that do not require joining.
Problems: ??
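
To make the three stages concrete, here is a rough SQL sketch of what each one might look like. The table and column names (customer, customer_profile, customer_core and so on) are invented purely for illustration and are not taken from the database that was benchmarked:

    -- Stage 1: a 1:1 relationship that has to be queried with a join.
    CREATE TABLE customer (
        id   INT PRIMARY KEY,
        name VARCHAR(100)
    );
    CREATE TABLE customer_profile (
        customer_id INT PRIMARY KEY,      -- 1:1 with customer.id
        address     VARCHAR(255),
        notes       TEXT
    );
    SELECT c.name, p.address
      FROM customer c
      JOIN customer_profile p ON p.customer_id = c.id
     WHERE c.id = 42;

    -- Stage 2: everything merged into one wide table.  The join is
    -- gone, but fields near the end of the row cost more to reach.
    CREATE TABLE customer_merged (
        id      INT PRIMARY KEY,
        name    VARCHAR(100),
        address VARCHAR(255),
        notes   TEXT
        -- plus every other field from the old 1:1 tables
    );
    SELECT notes FROM customer_merged WHERE id = 42;

    -- Stage 3: split by usage, so no query ever touches more than
    -- one of the tables (and therefore no join is needed).
    CREATE TABLE customer_core (          -- hit by the frequent queries
        id   INT PRIMARY KEY,
        name VARCHAR(100)
    );
    CREATE TABLE customer_extra (         -- hit only by the rare queries
        id      INT PRIMARY KEY,          -- same key value, never joined
        address VARCHAR(255),
        notes   TEXT
    );
    SELECT name  FROM customer_core  WHERE id = 42;   -- frequent path
    SELECT notes FROM customer_extra WHERE id = 42;   -- rare path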

As we all know, seeking through any file takes time. By combining large 1:1 relationships you remove joins but create tables with a large number of fields, which in turn have the problem of seeking to fields at the tail end.
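
To illustrate that tail-end seek, the two layouts below hold the same data but place the frequently-read field either at the front of the row or behind a couple of variable-length fields; this is the sort of pair a benchmark of tail-end field access would compare. Again, the table and column names are invented, and how much the position actually costs depends on the storage engine:

    -- Same logical record, two physical layouts (names are made up).
    CREATE TABLE hot_field_first (
        id       INT PRIMARY KEY,
        hot_flag SMALLINT,       -- the frequently read field, near the start
        bio      TEXT,
        history  TEXT
    );
    CREATE TABLE hot_field_last (
        id       INT PRIMARY KEY,
        bio      TEXT,
        history  TEXT,
        hot_flag SMALLINT        -- same field, now at the tail end of the row
    );
    -- Logically identical queries; on engines that store rows with
    -- variable-length fields, the second one has to step over bio and
    -- history before it can read hot_flag.
    SELECT hot_flag FROM hot_field_first WHERE id = 42;
    SELECT hot_flag FROM hot_field_last  WHERE id = 42;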

What I am asking is more about design procedure than the database system itself: is the solution I have shown for 1:1 relationships the correct method for solving the seek problems, or does the solution itself present unseen problems?

James Austin
