Even without considering all the good points brought up so far, the GIL in Python (CPython at least) makes multithreading, in general, land somewhere between downright impossible and not worth the effort.
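To illustrate (a minimal sketch, nothing Softimage-specific): under CPython's GIL only one thread executes Python bytecode at a time, so splitting CPU-bound work across threads yields the same answer but essentially no speedup over doing it serially.

```python
import threading

def count_primes(lo, hi):
    """Naive CPU-bound work: count primes in [lo, hi)."""
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def serial(n):
    return count_primes(0, n)

def threaded(n, workers=2):
    # Split the range across threads; because of the GIL these threads
    # take turns on the interpreter rather than running in parallel.
    results = [0] * workers
    step = n // workers
    def job(i):
        results[i] = count_primes(i * step, (i + 1) * step)
    threads = [threading.Thread(target=job, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)
```

Timing `serial(200000)` against `threaded(200000)` shows the threaded version running no faster (often slightly slower, from switching overhead), which is the GIL in action for pure-Python CPU work. Threads still help when the work releases the GIL, e.g. blocking I/O or C extensions that drop it.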
Depending on the process, your only option might be to manage the threading yourself inside a C++ op, or you might have no options at all if you have to halt and listen for something too frequently (going between Soft's own callbacks, the event loop and so on). ICE, on the other hand, is a bit of a black box, but ultimately very comfortable and efficient at threading in some scenarios. So if you want to parallelize simply because you are working on a very large set of points or something like that, then writing it as an ICE node would most likely be both the easiest and the safest way. -- Our users will know fear and cower before our software! Ship it! Ship it and let them flee like the dogs they are!