The original question sounded like someone asking what errors might be thrown by a routine they wrote, where that routine used other components that might directly throw exceptions or call yet others, ad nauseam.
Some have questioned the purpose. I can well imagine that if such info were available, you could construct a data structure such as a graph and trace every possible error that could propagate back. But I am not so sure it is that simple. Some of the intervening functions will quite possibly deal with those errors. If a routine downstream already knows a further function might divide by zero, it may arrange to capture that exception and deal with it. So it will not likely propagate up from that source, albeit a second path may still let it propagate through from another such bad calculation that does not deal with it.

Further, some programs have code that checks the environment and may use different functions/methods on one kind of machine, OS, or version of Python than another. Ideally you would need to know the profile for the build you are on. And with interpreted languages, some are not so much built ahead of time as assembled as they go. Loading an alternate module or library with functions having the same names may bring in a very different profile of possible exceptions.

And what if your function is later used by others, and they do not know you added code to intercept all possible interruptions and chose how to deal with them, and they write code hoping to intercept something that then never arrives? Things may fail differently than expected. Some nice programming tricks that depend on it failing sometimes may no longer work as expected.

What exactly is the right way to deal with each interrupt? Choices range from ignoring them while squelching them, to passing them along relatively unchanged, perhaps logging what came by, to raising a different exception, perhaps your own new variety, or just halting everything. Any time some deeper piece of software is changed, as for a bug fix, it may change the profile of propagated errors.
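To make those choices concrete, here is a minimal sketch in Python. All names here (unstable_rate, RateError, and so on) are invented for illustration, not from any real library; the point is just the three patterns: squelch the exception entirely, translate it into your own variety while logging and chaining the original, or catch everything and discard the rare failures.

```python
import logging

class RateError(Exception):
    """Hypothetical domain-specific exception raised in place of a low-level one."""

def unstable_rate(a, b):
    # May raise ZeroDivisionError; callers may or may not know that.
    return a / b

def guarded_rate(a, b):
    # Squelch: the caller never sees the ZeroDivisionError at all.
    try:
        return unstable_rate(a, b)
    except ZeroDivisionError:
        return 0.0

def translated_rate(a, b):
    # Translate: log what came by, then raise a different exception,
    # chaining the original so the traceback is not lost.
    try:
        return unstable_rate(a, b)
    except ZeroDivisionError as exc:
        logging.warning("bad rate inputs: %r / %r", a, b)
        raise RateError("rate undefined for these inputs") from exc

def surviving_rates(trials):
    # Catch-all: discard the rare runs that fail for any reason at all
    # and keep the subset that worked, rather than crash the whole program.
    results = []
    for a, b in trials:
        try:
            results.append(unstable_rate(a, b))
        except Exception:
            continue  # does not matter why it failed, just do not stop
    return results
```

Note that each choice changes the "profile" a caller sees: after guarded_rate, no ZeroDivisionError can arrive from this path; after translated_rate, a caller watching for ZeroDivisionError instead gets a RateError it may not expect.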
A routine that once worried about running out of memory may instead change to only working on memory pre-allocated in large blocks, yet leave in the code that would throw an exception if it is ever reached. It may be that it can now never happen, but you might still be on the lookout for it, especially if the documentation does not change or you do not read it each and every time.

I also wonder at the complexity of the code needed. If a given error in a dozen places can generate the same exception, can you tell them apart so each has an appropriate solution, or do you lump them together? Can you replace a function with a few lines of code with thousands of lines of just-in-case code, and remain viable if every function does this and you have giant programs that use lots of CPU and memory?

I would say it is a more laudable goal for each function to publish what interrupts they do NOT handle that might come through, and perhaps why. Any they claim to handle make it moot what happens further along the stream, if done right. Bulletproofing even then may seem harmless, of course.

I have had cases where I intercepted ALL errors when a rare case broke code I used that I did not write. I got an odd error maybe once every million times running on random simulated data, and I just wanted to throw that result away and keep the subset of results that worked, converged, and passed some post-tests. It did not matter to me why it failed, just that it not be allowed to crash the program entirely. Trying to deal with every possible exception would have been overkill.

So, I do sympathize with a wish to be able to determine what errors might arise and to decide whether having your code deal with them makes sense. I just suggest that whatever you do in the general case may not be optimal. It may do too much and yet still allow problems to seep through.
-- https://mail.python.org/mailman/listinfo/python-list