On 10/20/14, 3:49 PM, David G Johnston wrote:
>> Well, that is at least doable, but probably rather ugly. It would probably
>> be less ugly if our test framework had a way to test for errors (ala
>> pgTap).
>>
>> Where I was going with this is a full-on brute-force test: execute every
>> possible command with autocommit turned off. We don't need to check that
>> each command does what it's supposed to do, only that it can execute.
>>
>> Of course, the huge problem with that is knowing how to actually
>> successfully run each command. :(  Theoretically the tests could be
>> structured in such a way that there's a subset of tests that just see if
>> the command even executes, but creating that is obviously a lot of work
>> and with our current test framework probably a real pain to maintain.
> From the comments here, the effort needed to prevent this particular
> oversight seems excessive compared to the error it is trying to prevent - an
> error that is fairly easily remedied in a minor release and which has an
> easy workaround.

> That said, can we just do:
>
> "1) I don't know about a definitive way. I used grep to find all
>     statements calling PreventTransactionChain."
>
> and save the results to an .out file with a comment somewhere that if there
> is any change to the content of this file the corresponding command should
> be manually tested in psql with autocommit=off.  This seems to be what you
> are saying, but the psql check does not have to be automated...
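
For the sake of discussion, here's a rough sketch of how that scan could
produce such a file. The source path, the output file name, and the script
itself are all invented for illustration, not anything that exists in the tree:

    #!/usr/bin/env python3
    # Hypothetical sketch only: scan the backend sources for calls to
    # PreventTransactionChain and dump the call sites to a file that
    # could be committed as expected output.  The path and the output
    # file name are invented for illustration.
    import os
    import re

    SRC = "src/backend"                       # assumed source root
    OUT = "prevent_transaction_chain.out"     # assumed expected-output file

    call = re.compile(r"\bPreventTransactionChain\s*\(")
    sites = set()
    for root, _, files in os.walk(SRC):
        for name in files:
            if not name.endswith(".c"):
                continue
            path = os.path.join(root, name)
            with open(path, errors="ignore") as fh:
                for line in fh:
                    if call.search(line):
                        sites.add("%s: %s" % (path, line.strip()))

    with open(OUT, "w") as fh:
        fh.write("# If this list changes, manually verify the new command\n")
        fh.write("# in psql with AUTOCOMMIT off before updating this file.\n")
        for site in sorted(sites):
            fh.write(site + "\n")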

Are you thinking we'd commit the expected output of the perl script and have 
the regression suite call that script to verify it? That seems like a good way 
to fix this. The only better option I can come up with is if the perl script 
generated an actual test that we know would fail if a new command showed up.
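Very roughly, the generated check I'm imagining would just diff a fresh scan
against the committed list and error out on anything new - again, only a
sketch, with made-up file names:

    #!/usr/bin/env python3
    # Hypothetical companion check: compare a fresh scan against the
    # committed expected file and fail if any new call site of
    # PreventTransactionChain has appeared.  File names are invented.
    import sys

    def read_sites(path):
        with open(path) as fh:
            return {ln.strip() for ln in fh
                    if ln.strip() and not ln.startswith("#")}

    expected = read_sites("prevent_transaction_chain.out")
    current = read_sites("prevent_transaction_chain.new")   # fresh scan

    new = current - expected
    if new:
        print("New PreventTransactionChain call sites found; please verify")
        print("them in psql with AUTOCOMMIT off, then update the expected file:")
        for site in sorted(new):
            print("    " + site)
        sys.exit(1)
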
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

