Robert Haas <robertmh...@gmail.com> writes:
> On Wed, Feb 11, 2015 at 2:00 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>> But as far as what has been discussed on the central topic of this
>> thread, I think that doing the vacuum and making the failure for
>> non-existent tables be non-fatal when -f is provided would be an
>> improvement.  Or maybe just making it non-fatal at all times--if the
>> table is needed and not present, the session will fail quite soon
>> anyway.  I don't see the other changes as being improvements.  I would
>> rather just learn to add the -n when I use -f and don't have the
>> default tables in place, than have to learn new methods for saying
>> "no really, I left -n off on purpose" when I have a custom file which
>> does use the default tables and I want them vacuumed.
> So, discussion seems to have died off here.  I think what Jeff is
> proposing here is a reasonable compromise.  Patch for that attached.

+1 as to the basic behavior, but I'm not convinced that this is
user-friendly reporting:

+       if (PQresultStatus(res) != PGRES_COMMAND_OK)
+               fprintf(stderr, "%s", PQerrorMessage(con));

I would be a bit surprised to see pgbench report an ERROR and then
continue on anyway; I might think that was a bug, even.  I am not sure
exactly what it should print instead though.  Some perhaps viable
proposals:

* don't print anything at all, just chug along.

* do something like fprintf(stderr, "Ignoring: %s", PQerrorMessage(con));

* add something like "(Ignoring this error and continuing anyway)" on
a line after the error message.

(I realize this takes us right back into the bikeshedding game, but I
do think that what's displayed is important.)

			regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers