On Mar 9, 11:41 pm, Roopan <[EMAIL PROTECTED]> wrote:
> Hello!
>
> I am looking at developing an enterprise-grade distributed data
> sharing application - key requirements are productivity and platform
> portability.
>
> Will it be sensible to use C++ for performance-critical sections and
> Python for all the glue logic?
>
> Pls comment from your *experiences* how Python scales to large
> projects (> 200KLOC).
> I assume the C++/Python binding is fairly painless.
>
> Regards
> Elam.
You might try prototyping as much as possible in Python, as soon as possible. Then, and only after getting something that computes the right results, profile your prototype to see where the real bottlenecks are. It is often not evident at the beginning where the bottlenecks in the prototype will be.

If you write most of the prototype in Python you will write fewer lines of code in a shorter time, and so should be more inclined to experiment with different algorithms and do more testing (doctest can be great for this).

After profiling, there may be ways to remove a bottleneck other than writing your own C++ libraries: using existing highly-optimised libraries such as Numpy, or Psyco, a specialising just-in-time compiler that can approach C speeds for pure Python code. Failing those, you can create your own C++-based extension modules.

You might want to ask the Mercurial development team how they got their impressive speed and functionality out of using mainly Python with critical regions in C. Or watch this:
http://video.google.com/videoplay?docid=-7724296011317502612

- Paddy.
--
http://mail.python.org/mailman/listinfo/python-list
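P.S. A minimal sketch of the "test first, profile later" workflow mentioned above, using only the standard library's doctest and cProfile modules (the function and its workload are made up for illustration):

```python
# Sketch: verify a pure-Python prototype with doctest, then profile it
# with cProfile to find the real bottlenecks before reaching for C++.
import cProfile
import doctest


def word_counts(text):
    """Count occurrences of each word in *text*.

    >>> word_counts("spam eggs spam")
    {'spam': 2, 'eggs': 1}
    """
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts


if __name__ == "__main__":
    doctest.testmod()  # first: check the prototype computes the right results
    # then: profile a representative workload; the report shows per-call
    # timings, so you see which functions are actually worth optimising
    cProfile.run("word_counts('spam eggs spam ' * 100000)")
```

Only the functions that dominate the cProfile report are candidates for rewriting in C++; often a different algorithm or an optimised library removes the bottleneck without leaving Python at all.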