> From testrun/pg_dump/002_pg_dump/log/regress_log_002_pg_dump, search
> for the "not ok" and then look at what it tried to do right before
> that. I see:
>
> pg_dump: error: prepared statement failed: ERROR: syntax error at or
> near "%"
> LINE 1: ..._histogram => %L::real[]) coalesce($2, format('%I.%I',
> a.nsp...
Thanks. Unfamiliar turf for me.

> > All those changes are available in the patches attached.
>
> How about if you provided "get" versions of the functions that return a
> set of rows that match what the "set" versions expect? That would make
> 0001 essentially a complete feature itself.

That's tricky. At the base level, those functions would just be an
encapsulation of "SELECT * FROM pg_stats WHERE schemaname = $1 AND
tablename = $2", which isn't all that much of a savings. Perhaps we can
make the documentation more explicit about the source and nature of the
parameters going into the pg_set_ functions.

Per conversation, it would be trivial to add helper functions that
replace the parameters after the initial oid with a pg_class or pg_stats
rowtype; the helper would dissect the values needed and call the more
complex function:

pg_set_relation_stats(oid, pg_class)
pg_set_attribute_stats(oid, pg_stats)

> I think it would also make the changes in pg_dump simpler, and the
> tests in 0001 a lot simpler.

I agree. The tests currently show that a fidelity copy can be made from
one table to another, but to do so we have to conceal the actual stats
values, because those are 1. not deterministic/known and 2. subject to
change from version to version. I can add some sets to arbitrary values,
like what was done for pg_set_relation_stats().
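For illustration, a "get" version along the lines discussed above could be little more than a set-returning wrapper around that query. This is only a sketch; the function name, parameter names, and the use of the pg_stats view rowtype are assumptions, not anything from the attached patches:

```sql
-- Hypothetical sketch only: a "get" function whose result rows are meant
-- to line up with what the corresponding "set" function expects.
-- Name, parameters, and return type are illustrative assumptions.
CREATE FUNCTION pg_get_attribute_stats(p_schemaname name, p_tablename name)
RETURNS SETOF pg_stats
LANGUAGE sql STABLE
AS $$
    SELECT *
    FROM pg_stats
    WHERE schemaname = p_schemaname
      AND tablename = p_tablename;
$$;
```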
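The rowtype-based helper idea could be sketched as below. The parameter list of the fuller function it delegates to (relpages, reltuples, relallvisible) is an assumption for illustration; the actual signature in the patches may differ:

```sql
-- Hypothetical wrapper: accept a pg_class row after the initial oid,
-- dissect the needed fields, and delegate to the more complex function.
-- The delegated-to signature here is an assumption, not from the patch.
CREATE FUNCTION pg_set_relation_stats(relation oid, stats pg_class)
RETURNS bool
LANGUAGE sql
AS $$
    SELECT pg_set_relation_stats(relation,
                                 (stats).relpages,
                                 (stats).reltuples,
                                 (stats).relallvisible);
$$;
```

Since PostgreSQL resolves overloads by argument types, the two-argument rowtype form can coexist with the fuller form under the same name.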