Hello there,
some helpful pointers provided on the list have put me on track to ask
further questions. Now that I'm acquainted with simulating interfaces
and duck typing (which is still a bit fuzzy to me), I think what I've
been looking for all along are the so-called 'design patterns'.
Looking at the Optimizer and Stats examples, I could see how new
objects were created that built upon a base class. Stats even had a
class to contain the calculation results, which brought me even closer
to the point. But now I'm stuck again...
I imagined three interfaces to be designed, through which the main
program controls the number-crunching without constraining its
implementation. These would be:
- getInputs: responsible for passing/preparing data and parameters for
the analysis/computation
- Analysis: which implements all the crunching, from FFTs to esoteric
stuff like genetic algorithms (and which should be considered opaque
from the main program's POV)
- AnalysisResults: to somehow get the results back into the main program.
An additional class would cater for plotting properties, which are
usually associated with a particular analysis. Finally, anticipating
that I may want to cascade analyses, the inputs and results must be
uniform - of the same form, likely numpy arrays. A rough sketch of
what I have in mind follows below.
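Here is that sketch (the class names are the ones listed above; the
method names get_data(), get_params(), run() and as_array(), and the
PlotProperties fields, are placeholders I invented just for
illustration):

import numpy as np

class GetInputs:
    """Prepares data and parameters for an analysis."""
    def get_data(self):
        # return the input data as a numpy array
        raise NotImplementedError
    def get_params(self):
        # return a dict of parameters for the analysis
        raise NotImplementedError

class PlotProperties:
    """Plotting properties usually tied to a particular analysis."""
    def __init__(self, title="", xlabel="", ylabel=""):
        self.title, self.xlabel, self.ylabel = title, xlabel, ylabel

class AnalysisResults:
    """Holds the results of an analysis in a uniform form."""
    def __init__(self, data, plot_info=None):
        self.data = np.asarray(data)  # results as a numpy array
        self.plot_info = plot_info    # optional PlotProperties
    def as_array(self):
        # uniform output, so results can feed another analysis
        return self.data

class Analysis:
    """Opaque number-crunching: takes arrays in, gives arrays back."""
    def __init__(self, make_plot_info=False):
        self.make_plot_info = make_plot_info  # the flag mentioned below
    def run(self, inputs):
        # inputs is a GetInputs; returns an AnalysisResults
        raise NotImplementedError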
Well, with this pattern, to implement an analysis I'd have to define
derived getInputs and AnalysisResults classes, and code Analysis to
accept and return arrays. A flag on the analysis object could be used
to control whether plot information is generated or not.
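Continuing the sketch, a concrete analysis could then look something
like this (FFTAnalysis, ArrayInputs and the use of numpy.fft are
made-up examples, only meant to show the shape of the code):

class ArrayInputs(GetInputs):
    """Trivial input provider: wraps an existing array."""
    def __init__(self, data, **params):
        self._data, self._params = np.asarray(data), params
    def get_data(self):
        return self._data
    def get_params(self):
        return self._params

class FFTAnalysis(Analysis):
    """Example analysis: power spectrum via an FFT."""
    def run(self, inputs):
        spectrum = np.abs(np.fft.rfft(inputs.get_data())) ** 2
        plot_info = None
        if self.make_plot_info:
            plot_info = PlotProperties(title="Power spectrum",
                                       xlabel="frequency bin",
                                       ylabel="power")
        return AnalysisResults(spectrum, plot_info)

# Cascading: the result array of one analysis feeds the next.
first = FFTAnalysis(make_plot_info=True).run(ArrayInputs(np.random.rand(256)))
second = FFTAnalysis().run(ArrayInputs(first.as_array()))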
(Hey, writing this email is already helping; I'll put the gist in my
documentation!)
So, after this brainstorming I'll do some more work on the framework.
I'd be most grateful for comments on the design pattern - is it
sensible, could it be better? Are there any good literature sources on
patterns for this type of work?
Thanks,
Renato