On 8/13/15 1:31 AM, Peter Geoghegan wrote:
> On Wed, Aug 12, 2015 at 11:23 PM, Magnus Hagander <mag...@hagander.net> wrote:
>>> The value of a core regression suite that takes less time to run has
>>> to be weighed against the possibility that a better core regression
>>> suite might cause us to find more bugs before committing. That could
>>> easily be worth the price in runtime.
>>
>> Or have a quickcheck you run "all the time" and then run the bigger one
>> once before committing perhaps?
>
> I favor splitting the regression tests to add "all the time" and
> "before commit" targets as you describe. I think that once the
> facility is there, we can determine over time how expansive that
> second category gets to be.
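For illustration, a split along those lines could look roughly like the sketch below in src/test/regress. This is hypothetical: the `quickcheck` target and `quick_schedule` file don't exist; only `$(pg_regress_check)`, `tablespace-setup`, and `parallel_schedule` follow the existing GNUmakefile there.

```makefile
# Hypothetical "all the time" vs. "before commit" targets.
# quick_schedule would list a trimmed subset of the tests in
# parallel_schedule; everything else mirrors the existing check target.
quickcheck: all tablespace-setup
	$(pg_regress_check) --schedule=$(srcdir)/quick_schedule $(EXTRA_TESTS)

check: all tablespace-setup
	$(pg_regress_check) --schedule=$(srcdir)/parallel_schedule $(EXTRA_TESTS)
```

With something like that in place, deciding which tests move out of quick_schedule becomes an ordinary schedule-file edit rather than a build-system change.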
I don't know how many folks work in a GitHub fork of Postgres, but
anyone who does could run the slow checks on every single push via
Travis CI; [1] is an example of that. That setup wouldn't work directly
for core development, since it depends on Peter Eisentraut's scripts [2],
which pull packages from apt.postgresql.org, but presumably it wouldn't
be too hard to get the tests running directly out of a Postgres repo.
[1] https://travis-ci.org/decibel/variant
[2] See wget URLs in
https://github.com/decibel/variant/blob/master/.travis.yml
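Running the tests straight out of a checkout, rather than from apt.postgresql.org packages, might look roughly like this. This is a hypothetical sketch of a .travis.yml, not the configuration from [1]; the configure flags are just common development settings.

```yaml
# Hypothetical Travis CI config building and testing a Postgres checkout
# directly, with no external package scripts.
language: c
before_script:
  - ./configure --enable-cassert --enable-debug
  - make -j2
script:
  - make check
```

The `script` step is where a quick/full split would pay off: pushes could run the fast target and a cron or pre-commit job could run the full one.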
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers