It seems that I made a thread-discipline error: when I saw this reply 
(paraphrased for brevity here) from Christophe, I asked a question that had 
nothing to do with the frequency, or the cost, of committing:

> You [cannot] commit in [a] BEGIN / END [block statement] that has an 
> exception handler [because] that creates a subtransaction for the duration of 
> the [block statement].

I asked this in response (again, paraphrased for brevity):

> Could the limitation be lifted...? [Or is the limitation] rooted in 
> profoundly deep features of the architecture?

Sorry that I caused some distraction. Anyway, Tom replied immediately. He said:

> BEGIN with an exception block is a subtransaction because it's defined to 
> roll back to the database state as of the start of the block if an exception 
> occurs. COMMIT in the middle fundamentally conflicts with that, I should 
> think.


Thanks, Tom. It's clear to me now that the present PG paradigm will never, ever 
change.
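
For anyone following along, here is a minimal sketch of the behaviour in 
question, in PL/pgSQL (the table "t" and procedure "p" are names I've invented 
for the illustration). I'd expect the "commit" to be rejected with an error 
along the lines of "cannot commit while a subtransaction is active", and the 
handler then traps that very error:

  create table t(k int primary key);

  create procedure p()
    language plpgsql
  as $body$
  begin
    insert into t(k) values (1);
    -- This block has an exception handler, so it runs as a subtransaction;
    -- the commit therefore cannot be honoured here.
    commit;
  exception
    when others then
      raise info 'caught: %', sqlerrm;
  end;
  $body$;

  call p();

When the handler is entered, the insert is rolled back along with the rest of 
the subtransaction, so "t" stays empty.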

So my conclusion stands for this use case: I'm using "serializable" isolation 
(a luxury that ORCL doesn't afford me), and I know that I can get a "cannot 
serialize" error at "commit" time. This means that I cannot take the 
appropriate action within my PL/pgSQL code and hide the whole story from 
client code. Rather, I must explain the business to the authors of the next 
tier and teach them when, and how, retry is appropriate.

Tom also asked whether ORCL has a different paradigm... Briefly: yes, and 
radically so. But do please feel free to skip over my sketch here if it 
doesn't interest you.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — 
«
The real account of which this is a sketch is of no interest at all if you work 
only with PG and have never used ORCL. But if you need to switch, 
intellectually, from one to the other—and maybe need to do this all the time 
because your job duties span both systems—then it *is* interesting.

ORCL's PL/SQL is compiled (and optimized) at "create procedure" time. The new 
source code replaces the old in the catalog, and if it fails to compile, then 
you're left with an invalid unit that you cannot execute. Compilation errors 
are recorded in the catalog too. Further, static dependencies (proc upon proc, 
proc upon table, etc) are also recorded in the catalog. This is intimately 
connected with the hard distinction between static and dynamic SQL. The latter 
simply passes the text on "as is" into the byte code to be dealt with at 
run-time. Only statements like "select", "insert", "update", "delete", and a few 
others can be static SQL. Table creation and the like must be dynamic SQL. This 
probably shocks those who move from PG to ORCL because you cannot, for example, 
create a table and then operate on it with static SQL in the same procedure.
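
To make that last contrast concrete from the PG side, something like this is 
perfectly ordinary in PL/pgSQL (the names are invented for the illustration), 
because each statement is resolved only when it first runs; in ORCL, as far as 
I know, you'd have to fall back on dynamic SQL for both statements:

  create procedure make_and_use_scratch()
    language plpgsql
  as $body$
  begin
    -- DDL followed by DML against the brand-new table, in one procedure.
    create table scratch(v text);
    insert into scratch(v) values ('hello');
  end;
  $body$;

  call make_and_use_scratch();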

In particular, for the present discussion, the PL/SQL block statement is a pure 
lexical device. (This is the case in PL/SQL's progenitor, Ada. And that's where 
all that stuff about DIANA, which the PL/SQL programmer eventually comes to hear 
about, comes from.) All memory that you had a block statement in the source is 
lost in the compiled so-called byte code that gets interpreted at run time. On 
the other hand, every call from PL/SQL to SQL is done in its own 
subtransaction—and if it fails, then that single statement is atomically rolled 
back. The effect of all the SQLs to date, at this moment, remains intact—but 
uncommitted. (Of course, you might have issued "commit"(s) programmatically. So 
I'm talking about SQLs that were done since the most recent "commit".) 

Significantly, the failure of a call from PL/SQL to SQL raises an exception—so 
(as well as the single-statement rollback) you now have an in-flight exception 
that flies up through successive scopes in search of a matching handler. If it 
remains unhandled at the last moment before the top-level PL/SQL "call" is due 
to finish, then a "rollback" is automatically issued. But if a handler *is* 
found, well... the exception is dead and you can carry on. Like everything else 
in programming, the code author must work out what "safe" is. (It could be to 
turn an insert that fails 'cos a unique key is violated into an update.) In 
ORCL, just as in PG, writing "when others then null" is held to be stupid. And 
code examples that do this are deemed to be not worthy of discussion.
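
Since this is a PG list, here's the shape of that "turn the failed insert into 
an update" handler in PL/pgSQL terms (again, invented names); in ORCL the 
handler would name the corresponding predefined exception (dup_val_on_index, 
if memory serves) instead:

  create table account(id int primary key, balance numeric not null);

  create function set_balance(p_id int, p_balance numeric)
    returns void
    language plpgsql
  as $body$
  begin
    insert into account(id, balance) values (p_id, p_balance);
  exception
    when unique_violation then
      -- The failed insert is rolled back on entry to the handler;
      -- "safe", here, means recording the new balance anyway.
      update account set balance = p_balance where id = p_id;
  end;
  $body$;

(In PG, "insert ... on conflict" is the more idiomatic way to get this effect; 
the handler version is shown only to mirror the reasoning above.)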

Though the paradigms are different, each allows you properly to implement 
mission-critical applications. It's rather like English and Chinese. 
Astonishingly different. But each supports all that you need to let people 
communicate about mundane daily business, science, philosophy, epistemology, 
and so on.
»





