On 18 May, 00:32, Vicent Giner <[EMAIL PROTECTED]> wrote:
> Maybe this is not the right forum, but maybe you can give me some
> hints or tips...
>
> Thank you in advance.
If Python doesn't run in production systems, execution speed doesn't
matter much. What actually matters when *developing* non-trivial
algorithms is partitioning the dataflow: it might simply take too long
to reach a certain execution point that is of current interest, and
that point has to be reached repeatedly. For each method call one might
want to work with pre-computed data that were serialized in a former
run. Given Python's flexibility this should be easy, and so it is! The
combination of object pickling/unpickling and decorators makes this a
snap. Just look at the following simple decorator:

    import pickle

    def dump_data(func):
        def call(*args, **kwd):
            res = func(*args, **kwd)
            fl = open("dump_data.pkl", "wb")     # binary mode for pickle
            pickle.Pickler(fl, 0).dump(args[0])  # args[0] is self
            fl.close()
            return res
        call.__name__ = func.__name__
        call.__doc__ = func.__doc__
        return call

and the corresponding loader:

    def load_obj():
        f = open("dump_data.pkl", "rb")
        try:
            return pickle.Unpickler(f).load()
        finally:
            f.close()

One might decorate an arbitrary object method foo() with dump_data and
then, in another run, resume the activity of the object from the state
it had after foo was called, by restoring it with load_obj.

One can improve on this further by serializing the test driver itself
when it is implemented as a generator. This is non-standard, though,
and requires either some work of my own [1] or, alternatively,
Stackless Python [2].

What I intended to make clear is that Python has many such goodies in
its toolbox, besides having lots of bindings and being quite readable
(the usual suspects when it comes to advocacy in threads like these).

[1] http://www.fiber-space.de/generator_tools/doc/generator_tools.html
[2] http://www.stackless.com/
--
http://mail.python.org/mailman/listinfo/python-list
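To make the round trip concrete, here is a minimal self-contained sketch of the technique: a method is decorated with dump_data so that the object's state is snapshotted after every call, and a "second run" restores the object with load_obj and resumes from there. The Counter class and its step() method are made-up illustrations, not from the original post.

```python
import pickle

def dump_data(func):
    def call(*args, **kwd):
        res = func(*args, **kwd)
        with open("dump_data.pkl", "wb") as fl:  # binary mode for pickle
            pickle.Pickler(fl, 0).dump(args[0])  # args[0] is self
        return res
    call.__name__ = func.__name__
    call.__doc__ = func.__doc__
    return call

def load_obj():
    with open("dump_data.pkl", "rb") as f:
        return pickle.Unpickler(f).load()

class Counter:
    """Hypothetical stand-in for an object doing expensive work."""
    def __init__(self):
        self.n = 0

    @dump_data
    def step(self):
        # expensive computation would go here; the whole object is
        # snapshotted to disk after each call returns
        self.n += 1
        return self.n

# first run: compute and snapshot
c = Counter()
c.step()
c.step()

# "another run": restore the object and resume where it left off
c2 = load_obj()
print(c2.n)  # prints 2
```

Note that unpickling requires the class definition to be importable in the restoring run, so in a real setup Counter would live in a module shared by both runs.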