-------------- Original message ----------------------
From: "Jeremy White" <[EMAIL PROTECTED]>

> >With the following conclusions:
> >1. In all cases the new algorithm is faster. This differs
> >    from the results at work in which the Minimum test showed
> >    that the existing algorithm was faster.
> 
> I'm not convinced that it's worth speeding up this area of Win32::GUI. Even 
> if this algorithm was several orders of magnitude faster it wouldn't make 
> that much difference (if at all) to a running Win32::GUI programme. 

Excellent point, and consistent with Glenn Linderman's analysis. Apart from 
personal involvement, I believe that where you can speed up an application you 
should: the sum of many very small inefficiencies can add up to a large speed 
problem. However, that is a 'feeling' rather than a certainty, and I can see 
that there might be reasonable cause to defer action. Other than the 
observation that if all constant search tables were reorganized into the same 
format as that used in the GUI_Constants.cpp file, it would be possible to 
develop an inline binary search function that is usable everywhere, I'm going 
to abandon this effort.
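
To make that concrete, here is a minimal sketch, under my own assumptions, of 
what a shared table format and an everywhere-usable binary search might look 
like. The names (ConstEntry, find_constant) and the layout are mine for 
illustration; I have not checked them against the actual GUI_Constants.cpp 
format:

  #include <string.h>

  /* Hypothetical shared format: every constant table is an array of
     these entries, kept sorted by name. */
  typedef struct {
      const char *name;
      long        value;
  } ConstEntry;

  /* One inline binary search then serves every table laid out this way.
     Returns the matching index, or -1 on failure. */
  inline int find_constant(const ConstEntry *table, int count,
                           const char *name)
  {
      int lo = 0, hi = count - 1;
      while (lo <= hi) {
          int mid = lo + (hi - lo) / 2;
          int cmp = strcmp(name, table[mid].name);
          if (cmp == 0) return mid;
          if (cmp < 0)  hi = mid - 1;
          else          lo = mid + 1;
      }
      return -1;
  }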

And, wanting to get the last word in even though I have already gotten the 
last word in: modifying all constant searches to use the same data structure 
reduces maintenance overhead. For one thing, the current hash algorithm 
requires new coding for each insertion, and that coding requires analysis and 
checkout. The new algorithm only requires inserting data into a sorted list. 
No analysis, no coding; if it works once, it works forever. So if one measure 
of cost is maintenance, then the buy-in is both speed and a reduction in 
maintenance time.
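
For instance, under the same hypothetical table format sketched above, adding 
a new constant is a one-line insertion in sorted position (the values here are 
the standard Win32 window styles):

  /* Adding WS_VSCROLL is one initializer in sorted position -- no new
     hash analysis, no new code. */
  static const ConstEntry ws_table[] = {
      { "WS_BORDER",  0x00800000L },
      { "WS_CAPTION", 0x00C00000L },
      { "WS_VSCROLL", 0x00200000L },
  };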

And, wanting to get yet another last word in over the last, last word: there 
is another useful coding technique which I have independently discovered (and 
since there is "nothing new under the sun", the best that can be said is that 
others have discovered it before me). Suppose that after the 'search' the user 
wants to do something. Then a companion algorithm would be:

  typedef int (*function)(int arg);   // pointer to a function taking the search argument

  function array[] = { &function1, &function2, /* ... */ };

  (*array[search(arg)])(arg);   // direct execution of an arg-dependent action

Executing a function after a search becomes a direct access into an array of 
functions. Since failure conditions are returned with a known value, the array 
is prepacked with a function (or functions) that handles error conditions. 
Again, maintenance is reduced: for new functionality the array is extended and 
the function is written, but there is no need for new 'glue' code to determine 
what to do. Depending on the extent of the 'glue', performance is increased 
and the code footprint is reduced.
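
For completeness, here is a small self-contained sketch of the whole pattern, 
with the failure slot prepacked at a known index. All of the handler names and 
the stand-in search are illustrative, not actual Win32::GUI code:

  #include <stdio.h>

  typedef int (*function)(int arg);

  static int on_open (int arg) { printf("open %d\n",  arg); return 0; }
  static int on_close(int arg) { printf("close %d\n", arg); return 0; }
  static int on_error(int arg) { printf("no handler for %d\n", arg); return -1; }

  /* Slot 0 is reserved for the failure case; a failed search returns 0,
     so errors dispatch through the same array access as successes. */
  static const function dispatch[] = { &on_error, &on_open, &on_close };

  /* Stand-in for the real search: maps an argument to a table index. */
  static int search(int arg)
  {
      if (arg == 1) return 1;
      if (arg == 2) return 2;
      return 0;                 /* known failure value */
  }

  int main(void)
  {
      (*dispatch[search(1)])(1);   /* runs on_open */
      (*dispatch[search(7)])(7);   /* falls through to on_error */
      return 0;
  }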

If you like the idea, it's worth 2 cents (jez's point). If you don't, it's no 
cents.

Thank you one and all.

art
