Re: [HACKERS] [patch] libpq one-row-at-a-time API
Marko Kreen writes:
> On Wed, Aug 1, 2012 at 6:18 PM, Tom Lane wrote:
>> So I'm working from the first set of patches in your message
>> <20120721194907.ga28...@gmail.com>.
> Great!

Here's an updated draft patch. I've not looked at the docs yet so this
doesn't include that, but I'm reasonably happy with the code changes now.

The main difference from what you had is that I pushed the creation of
the single-row PGresults into pqRowProcessor, so that they're formed
immediately while scanning the input message. That other method with
postponing examination of the rowBuf does not work, any more than it did
with the original patch, because you can't assume that the network input
buffer is going to hold still. For example, calling PQconsumeInput after
parseInput has absorbed a row-data message but before calling PQgetResult
would likely break things. In principle I suppose we could hack
PQconsumeInput enough that it didn't damage the row buffer while still
meeting its API contract of clearing the read-ready condition on the
socket; but it wouldn't be simple or cheap to do so.

			regards, tom lane

diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 1e62d8091a9d2bdf60af6745d5a01ee14ee5cf5a..7cf86176c2385b9e4ee37c72d7e3c662ea079f7a 100644
*** a/contrib/dblink/dblink.c
--- b/contrib/dblink/dblink.c
*************** typedef struct storeInfo
*** 70,75 ****
--- 70,78 ----
  	AttInMetadata *attinmeta;
  	MemoryContext tmpcontext;
  	char	  **cstrs;
+ 	/* temp storage for results to avoid leaks on exception */
+ 	PGresult   *last_res;
+ 	PGresult   *cur_res;
  } storeInfo;
  
  /*
*************** static void materializeQueryResult(Funct
*** 83,90 ****
  					   const char *conname,
  					   const char *sql,
  					   bool fail);
! static int storeHandler(PGresult *res, const PGdataValue *columns,
! 			 const char **errmsgp, void *param);
  static remoteConn *getConnectionByName(const char *name);
  static HTAB *createConnHash(void);
  static void createNewConnection(const char *name, remoteConn *rconn);
--- 86,93 ----
  					   const char *conname,
  					   const char *sql,
  					   bool fail);
! static PGresult *storeQueryResult(storeInfo *sinfo, PGconn *conn, const char *sql);
! static void storeRow(storeInfo *sinfo, PGresult *res, bool first);
  static remoteConn *getConnectionByName(const char *name);
  static HTAB *createConnHash(void);
  static void createNewConnection(const char *name, remoteConn *rconn);
*************** materializeResult(FunctionCallInfo fcinf
*** 927,934 ****
  /*
   * Execute the given SQL command and store its results into a tuplestore
   * to be returned as the result of the current function.
   * This is equivalent to PQexec followed by materializeResult, but we make
!  * use of libpq's "row processor" API to reduce per-row overhead.
   */
  static void
  materializeQueryResult(FunctionCallInfo fcinfo,
--- 930,939 ----
  /*
   * Execute the given SQL command and store its results into a tuplestore
   * to be returned as the result of the current function.
+  *
   * This is equivalent to PQexec followed by materializeResult, but we make
!  * use of libpq's single-row mode to avoid accumulating the whole result
!  * inside libpq before it gets transferred to the tuplestore.
   */
  static void
  materializeQueryResult(FunctionCallInfo fcinfo,
*************** materializeQueryResult(FunctionCallInfo
*** 944,962 ****
  	/* prepTuplestoreResult must have been called previously */
  	Assert(rsinfo->returnMode == SFRM_Materialize);
  
  	PG_TRY();
  	{
! 		/* initialize storeInfo to empty */
! 		memset(&sinfo, 0, sizeof(sinfo));
! 		sinfo.fcinfo = fcinfo;
! 
! 		/* We'll collect tuples using storeHandler */
! 		PQsetRowProcessor(conn, storeHandler, &sinfo);
! 
! 		res = PQexec(conn, sql);
! 
! 		/* We don't keep the custom row processor installed permanently */
! 		PQsetRowProcessor(conn, NULL, NULL);
  
  		if (!res ||
  			(PQresultStatus(res) != PGRES_COMMAND_OK &&
--- 949,962 ----
  	/* prepTuplestoreResult must have been called previously */
  	Assert(rsinfo->returnMode == SFRM_Materialize);
  
+ 	/* initialize storeInfo to empty */
+ 	memset(&sinfo, 0, sizeof(sinfo));
+ 	sinfo.fcinfo = fcinfo;
+ 
  	PG_TRY();
  	{
! 		/* execute query, collecting any tuples into the tuplestore */
! 		res = storeQueryResult(&sinfo, conn, sql);
  
  		if (!res ||
  			(PQresultStatus(res) != PGRES_COMMAND_OK &&
*************** materializeQueryResult(FunctionCallInfo
*** 975,982 ****
  		else if (PQresultStatus(res) == PGRES_COMMAND_OK)
  		{
  			/*
! 			 * storeHandler didn't get called, so we need to convert the
! 			 * command status string to a tuple manually
  			 */
  			TupleDesc	tupdesc;
  			AttInMetadata *attinmeta;
--- 975,982 ----
  		else if (PQresultStatus(res) == PGRES_COMMAND_OK)
  		{
  			/*
! 			 * storeRow didn't get called, so we need to convert the command
! 			 * status string to a tuple manually
  			 */
  			TupleDesc	tupdesc;
  			AttI
Re: [HACKERS] [patch] libpq one-row-at-a-time API
On Wed, Aug 1, 2012 at 6:18 PM, Tom Lane wrote:
> Marko Kreen writes:
>> * Is there better API than PQsetSingleRowMode()?  New PQsend*
>> functions is my alternative.
>
> After thinking it over, I'm really unexcited about adding new versions
> of all the PQsend functions.  If we had the prospect of more flags in
> future that could be added to a bitmask flags argument, it would be
> more palatable, but it's far from clear what those might be.  So I'm
> now leaning towards using PQsetSingleRowMode as-is.

I can imagine such a flag - I'd like to have a flag that allows sending
more queries to the server without waiting for the old ones to finish.
Currently, when a batch-processing app wants to keep the backend busy,
it needs to stuff many statements into a single query, which is ugly.
Among other things it loses the possibility to separate arguments from
the query, and it breaks error reporting. The flag would fix this, and
it would be useful in other scenarios as well.

Interestingly, although it affects a different area, it would also be a
flag for PQsend* and not for PQexec*. So maybe the direction is not
completely wrong. But I cannot imagine why PQsetSingleRowMode would be
hard to keep working even if we later have PQsend functions with flags,
so I'm not worried about getting it exactly right from the start.

>> * Should we rollback rowBuf change?  I think no, as per benchmark
>> it performs better than old code.
>
> I had already pretty much come to that conclusion just based on code
> review, without thinking about performance.  In particular, we had done
> some nontrivial work towards improving error-case handling in the data
> message parsing code, and I don't really want to give that up, nor
> rewrite it on the fly now.  About the only reason I could see for
> reverting rowBuf was that I thought it might hurt performance; so now
> that you've proven the opposite, we should leave it alone.

An additional argument for rowBuf is Merlin's wish for weirdly optimized
PGresults. Basically, with rowBuf anybody who wants to experiment with
different ways to process row data just needs to implement a single
function that processes the data and then advances the state machine,
like I did with PQgetRowData(). Without it, quite a lot of code needs to
be patched.

> So I'm working from the first set of patches in your message
> <20120721194907.ga28...@gmail.com>.

Great!

-- 
marko

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH] Support for foreign keys with arrays
On 30/07/12 19:16, Gabriele Bartolini wrote:
> And it can also be interchanged with "Array element Foreign Key".

As promised, we have sent a patch for the "Array ELEMENT foreign key"
support. We are discontinuing this thread here and will continue
discussing the former "Foreign Keys with Arrays"/"EACH Foreign Key"
feature support from here:

http://archives.postgresql.org/pgsql-hackers/2012-08/msg00011.php

Thank you,
Gabriele

-- 
Gabriele Bartolini - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it
[HACKERS] [PATCH] Support for Array ELEMENT Foreign Keys
Hi,

please find attached version 1 of the patch introducing "Array ELEMENT
Foreign Keys" support. This new thread and related patch supersede any
previous discussion about "Support for foreign keys with arrays", as
anticipated in
http://archives.postgresql.org/pgsql-hackers/2012-07/msg01098.php

This patch adds:

* support for the ELEMENT REFERENCES column constraint on array types
  - e.g. c1 INT[] ELEMENT REFERENCES t1
* support for array ELEMENT foreign key table constraints
  - e.g. FOREIGN KEY (ELEMENT c1) REFERENCES t1
* support for array ELEMENT foreign keys in multi-column foreign key
  table constraints
  - e.g. FOREIGN KEY (c1, ELEMENT c2) REFERENCES t1 (u1, u2)

Array ELEMENT foreign keys are a special kind of foreign key constraint
requiring the referencing column to be an array of elements of the same
type as (or a type compatible with) the referenced column in the
referenced table. Array ELEMENT foreign keys are a PostgreSQL extension
and are not included in the SQL standard.

A usage example for this feature is the following:

CREATE TABLE drivers (
    driver_id integer PRIMARY KEY,
    first_name text,
    last_name text,
    ...
);

CREATE TABLE races (
    race_id integer PRIMARY KEY,
    title text,
    race_day DATE,
    ...
    final_positions integer[] ELEMENT REFERENCES drivers
);

This initial patch presents the following limitations:

* Only one "ELEMENT" column is allowed in a multi-column key
  - e.g. FOREIGN KEY (c1, ELEMENT c2, ELEMENT c3) REFERENCES t1 (u1, u2, u3)
    will throw an error
* Supported actions:
  - NO ACTION
  - RESTRICT

As noted in the last 9.2 commitfest, we feel it is important to
consolidate the "array ELEMENT foreign key" syntax and to postpone
decisions about referential integrity actions, allowing the community to
gain a clearer understanding of the feature's goals and requirements.
However, with the array_replace() and array_remove() functions already
committed, and using our previous patch as a basis, we are confident
that a generally accepted syntax will emerge in the coming months
through community collaboration.

The patch includes documentation and extensive test coverage
(element_foreign_key.sql regression test file).

Co-authors of this patch are Gabriele and Gianni from our Italian team
at 2ndQuadrant.

Thank you.

Cheers,
Marco

-- 
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco.nenciar...@2ndquadrant.it | www.2ndQuadrant.it

Array-ELEMENT-foreign-key-v1.patch.bz2
Description: application/bzip
Re: [HACKERS] compiler barriers (was: New statistics for WAL buffer dirty writes)
On Wed, Aug 1, 2012 at 10:12 AM, Tom Lane wrote:
> I think you may be right that using __asm__ __volatile__ in gcc
> S_UNLOCK cases would be a big step forward, but it needs more research
> to see if that's the only fix needed.

Looking through the spinlock implementations in s_lock.h, we start with
a bunch of GCC-ish things:

- i386 uses __asm__ __volatile__
- x86_64 uses __asm__ __volatile__
- ia64 uses __asm__ __volatile__ on GCC, and _InterlockedExchange on the
  Intel compiler
- arm uses GCC integer atomics if available, and __asm__ __volatile__
  otherwise
- s390 uses __asm__ __volatile__
- sparc uses __asm__ __volatile__
- ppc uses __asm__ __volatile__
- m68k uses __asm__ __volatile__
- vax uses __asm__ __volatile__
- ns32k uses __asm__ __volatile__
- alpha uses __asm__ __volatile__
- mips uses __asm__ __volatile__
- m32r calls what appears to be a system-provided tas() function
- SuperH uses __asm__ __volatile__

Presumably all the ones that are using __asm__ __volatile__ for TAS()
could also use that for a compiler barrier at release time, so the
interesting cases are ia64, arm, and m32r. The ARM case is not a
problem, because the GCC builtins are defined as compiler barriers. For
the Intel compiler case on ia64, I believe the existing definition of
pg_compiler_barrier() in storage/barrier.h should suffice for a release
barrier. I have no idea what to do about m32r.

The remaining implementations are for non-GCC compilers, which is where
things obviously get harder:

- Univel CC (is that sco cc?) uses an idiosyncratic asm inline syntax
- Tru64/alpha uses a system-provided primitive called __LOCK_LONG_RETRY
- hppa uses __asm__ __volatile__ on GCC; otherwise, it uses hpux_hppa.s
- itanium uses _Asm_xchg
- sgi uses OS or compiler primitives called test_and_set and
  test_then_and
- sinix uses OS or compiler primitives called acquire_lock and
  release_lock
- aix uses OS or compiler primitives called _check_lock and _clear_lock
- sparc uses pg_atomic_cas, which is implemented in sunstudio_x86.s or
  sunstudio_sparc.s
- win32 uses InterlockedCompareExchange()

Of these, Itanium and Win32 are not too hard to handle because there are
already compiler barrier definitions in storage/barrier.h:
_Asm_sched_fence() for Itanium and _ReadWriteBarrier() for Win32. The
rest are anybody's guess. I suspect that SGI, SINIX, and AIX are
*probably* fine as they are, because if the primitives they are using
are provided by the platform, then the compiler ought to be smart enough
to know what they mean. Even if they're just functions in a system
library somewhere, we're already relying on the fact that a call to a
globally accessible function acts as a compiler barrier, so I doubt we
are any worse off if we rely on it here, too. However, the remaining
cases - Univel CC, Tru64/alpha, hppa non-GCC, and sparc - probably need
work.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Help me develop new commit_delay advice
On 1 August 2012 15:14, Amit Kapila wrote:
> I shall look into this aspect also (setting commit_delay based on raw
> sync).  You also suggest if you want to run the test with different
> configuration.

Well, I was specifically interested in testing whether half of the raw
sync time is a widely useful setting across a variety of different,
though representative, I/O subsystems. Unfortunately, without some
context about raw sync speed to go along with your numbers, I cannot
advance or disprove that idea. It would also have been nice to see a
baseline number for commit_delay = 0, to get an idea of how effective
commit_delay may now be. However, that's secondary.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
Re: [HACKERS] [patch] libpq one-row-at-a-time API
[ getting back to this now after assorted distractions ]

Marko Kreen writes:
> Just to show agreement: both PQgetRowData() and optimized PGresult
> do not belong to 9.2.

OK, we're all on board with leaving those for later.

> Only open questions are:
> * Is there better API than PQsetSingleRowMode()?  New PQsend*
>   functions is my alternative.

After thinking it over, I'm really unexcited about adding new versions
of all the PQsend functions.  If we had the prospect of more flags in
future that could be added to a bitmask flags argument, it would be more
palatable, but it's far from clear what those might be.  So I'm now
leaning towards using PQsetSingleRowMode as-is.

> * Should we rollback rowBuf change?  I think no, as per benchmark
>   it performs better than old code.

I had already pretty much come to that conclusion just based on code
review, without thinking about performance.  In particular, we had done
some nontrivial work towards improving error-case handling in the data
message parsing code, and I don't really want to give that up, nor
rewrite it on the fly now.  About the only reason I could see for
reverting rowBuf was that I thought it might hurt performance; so now
that you've proven the opposite, we should leave it alone.

So I'm working from the first set of patches in your message
<20120721194907.ga28...@gmail.com>.

			regards, tom lane
Re: [HACKERS] New statistics for WAL buffer dirty writes
On Wed, Aug 1, 2012 at 10:12 AM, Tom Lane wrote:
> Solving the problem for linux only, or gcc only, isn't going to get us
> to a place where we can stop volatile-izing call sites.  We need to be
> sure it works for every single case supported by s_lock.h.

Yep, that's the problem all right.

> I think you may be right that using __asm__ __volatile__ in gcc
> S_UNLOCK cases would be a big step forward, but it needs more research
> to see if that's the only fix needed.

I agree, but I will note that I have done a fair bit of research on this
already, and there are definitions in storage/barrier.h for
pg_compiler_barrier() that cover gcc, icc, HP's aCC, MSVC, and Borland
C. There are probably other wacky compilers out there, though: looking
at the build farm, I see Sun Studio and sco cc as cases that would
likely need some attention. Are there any compilers not represented in
the build farm that we'd mind breaking?

If we can get working pg_compiler_barrier() definitions for all the
compilers we care about, the rest is probably mostly a question of going
through s_lock.h and inserting compiler barriers anywhere that they
aren't already implied by the existing code.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Help me develop new commit_delay advice
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Peter Geoghegan
> Sent: Sunday, July 29, 2012 9:09 PM
>
> I made what may turn out to be a useful observation during the
> development of the patch, which was that for both the tpc-b.sql and
> insert.sql pgbench-tools scripts, a commit_delay of half of my
> wal_sync_method's reported raw sync speed looked optimal. I use Linux,
> so my wal_sync_method happened to have been fdatasync. I measured this
> using pg_test_fsync.

I have done some basic tests for the commit_delay parameter.

OS version: SUSE Linux 10.3
PostgreSQL version: 9.3devel on x86-64, compiled by gcc (GCC) 4.1.2
20070115
Machine details: 8 core CPU, 24GB RAM

Testcase: pgbench tpc-b test. Before running the benchmark suite, the
buffers are loaded by using the pg_prewarm utility.

Test results are attached with this mail. Run1, Run2, Run3 mean the
same test was run 3 times.

> It would be useful, for a start, if I had numbers for a battery-backed
> write cache. I don't have access to one right now though, nor do I
> have access to any more interesting hardware, which is one reason why
> I'm asking for help with this.
> I like to run "sync" prior to running pg_test_fsync, just in case.
>
> [peter@peterlaptop pg_test_fsync]$ sync
>
> I then interpret the following output:
>
> [peter@peterlaptop pg_test_fsync]$ pg_test_fsync
> 2 seconds per test
> O_DIRECT supported on this platform for open_datasync and open_sync.
>
> Compare file sync methods using one 8kB write:
> (in wal_sync_method preference order, except fdatasync
> is Linux's default)
>         open_datasync             112.940 ops/sec
>         fdatasync                 114.035 ops/sec
>         fsync                      21.291 ops/sec
> *** SNIP ***

I shall look into this aspect also (setting commit_delay based on raw
sync). Please also suggest if you want me to run the test with a
different configuration.

With Regards,
Amit Kapila.

Machine details:  CPU 8 core, RAM 24GB, OS SUSE Linux 10.3
Server configuration:  synchronous commit on, shared buffers 1GB
pgbench tpc-b benchmark suite:  threads 8, clients 16, scale 75,
fill factor 100

        delay = 0        delay = 3000      delay = 4000     delay = 5000
        TPS  Total ops   TPS   Total ops   TPS  Total ops   TPS  Total ops
Run1    968  583060      973   584130      935  561516      916  550043
Run2    971  583035      963   578560      971  585143      921  556471
Run3    995  597368      978   590085      998  599371      872  523890
Avg     978  587821      971.3 584258.3    968  582010      903  543468
Re: [HACKERS] New statistics for WAL buffer dirty writes
Robert Haas writes:
> On Tue, Jul 31, 2012 at 4:06 PM, Tom Lane wrote:
>> I agree that volatile-izing everything in the vicinity is a sucky
>> solution, but the last time we looked at this there did not seem to
>> be a better one.
>
> Well, Linux has a barrier() primitive which is defined as a
> compiler-barrier, so I don't see why we shouldn't be able to manage
> the same thing.  In fact, we've already got it, though it's presently
> unused; see storage/barrier.h.

Solving the problem for linux only, or gcc only, isn't going to get us
to a place where we can stop volatile-izing call sites. We need to be
sure it works for every single case supported by s_lock.h.

I think you may be right that using __asm__ __volatile__ in gcc S_UNLOCK
cases would be a big step forward, but it needs more research to see if
that's the only fix needed.

			regards, tom lane
Re: [HACKERS] New statistics for WAL buffer dirty writes
On Tue, Jul 31, 2012 at 4:06 PM, Tom Lane wrote:
> Robert Haas writes:
>> IMHO, the way we have it now is kind of a mess.  SpinLockAcquire and
>> SpinLockRelease are required to be CPU barriers, but they are not
>> required to be compiler barriers.  If we changed that so that they
>> were required to act as barriers of both flavors,
>
> Since they are macros, how do you propose to do that exactly?

Why does it matter that they are macros?

> I agree that volatile-izing everything in the vicinity is a sucky
> solution, but the last time we looked at this there did not seem to
> be a better one.

Well, Linux has a barrier() primitive which is defined as a compiler
barrier, so I don't see why we shouldn't be able to manage the same
thing. In fact, we've already got it, though it's presently unused; see
storage/barrier.h.

Looking over s_lock.h, it looks like TAS is typically defined using
__asm__ __volatile__, and the __asm__ is marked as clobbering memory.
As the fine comments say, "this prevents gcc from thinking it can cache
the values of shared-memory fields across the asm code", which is
another way of saying that it's a compiler barrier. However, there's no
similar guard in S_UNLOCK, which is simply declared as a volatile store,
and therefore compiler ordering is guaranteed only with respect to other
volatile pointer references. If we added something of the form
__asm__ __volatile__("" : : : "memory") in there, it should serve as a
full compiler barrier. That might have to go in a static inline
function, as we do with TAS, but I think it should work.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Fixing syslogger rotation logic for first-time case
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tom Lane
>
> We've had a couple of complaints recently from people who were unhappy
> because the syslogger's log_truncate_on_rotation logic does not fire
> during the first log rotation after it's forked off from the
> postmaster.
>
> Any objections?

I have done the testing as per the issue reported, and below is the
result of the same.

Configuration
--------------
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%H.log'
log_truncate_on_rotation = on
log_rotation_age = 1h
log_rotation_size = 0

Behavior without the fix:
1. The problem was that on the first rotation, syslogger was appending
   data.

Behavior after the patch:
1. At startup time the file should be appended.            -- working fine
2. On the first rotation it should truncate the log file.  -- working fine

With Regards,
Amit Kapila.
[HACKERS] [PATCH] Make "psql -1 < file.sql" work as with "-f"
Dear PostgreSQL developers,

please find attached a patch so that:

    Make "psql -1 < file.sql" work as with "-f"

    Make the psql --single-transaction option work on a non-interactive
    standard input as well, so that "psql -1 < input.sql" behaves as
    "psql -1 -f input.sql".

This saner/less error-prone behavior was discussed in this thread back
in June:

http://archives.postgresql.org/pgsql-hackers/2012-06/msg00785.php

I have tested it manually and it works for me. I'm not sure this is the
best possible implementation, but it is a small-diff one. I haven't
found a place in the regression tests where "psql" could be tested with
different options. Did I miss something?

-- 
Fabien.

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index b6bf6a3..dc3c97e 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -512,10 +512,10 @@ PostgreSQL documentation
         When psql executes a script with the
-        -f option, adding this option wraps
-        BEGIN/COMMIT around the script to execute it
-        as a single transaction.  This ensures that either all the commands
-        complete successfully, or no changes are applied.
+        -f option or from a non-interactive standard input, adding
+        this option wraps BEGIN/COMMIT around the script
+        to execute it as a single transaction.  This ensures that either all
+        the commands complete successfully, or no changes are applied.
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 3ebf7cc..e56fd39 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -97,7 +97,8 @@ usage(void)
 	printf(_("  -V, --version            output version information, then exit\n"));
 	printf(_("  -X, --no-psqlrc          do not read startup file (~/.psqlrc)\n"));
 	printf(_("  -1 (\"one\"), --single-transaction\n"
-			 "                           execute command file as a single transaction\n"));
+			 "                           execute command file or non-interactive stdin\n"
+			 "                           as a single transaction\n"));
 	printf(_("  -?, --help               show this help, then exit\n"));
 
 	printf(_("\nInput and output options:\n"));
diff --git a/src/bin/psql/settings.h b/src/bin/psql/settings.h
index c907fa0..06d2407 100644
--- a/src/bin/psql/settings.h
+++ b/src/bin/psql/settings.h
@@ -76,6 +76,7 @@ typedef struct _psqlSettings
 	bool		notty;			/* stdin or stdout is not a tty (as determined
 								 * on startup) */
+	bool		stdin_notty;	/* stdin is not a tty (on startup) */
 	enum trivalue getPassword;	/* prompt the user for a username and password */
 	FILE	   *cur_cmd_source; /* describe the status of the current main
 								 * loop */
diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
index 9a6306b..fb417ce 100644
--- a/src/bin/psql/startup.c
+++ b/src/bin/psql/startup.c
@@ -133,7 +133,8 @@ main(int argc, char *argv[])
 	/* We must get COLUMNS here before readline() sets it */
 	pset.popt.topt.env_columns = getenv("COLUMNS") ? atoi(getenv("COLUMNS")) : 0;
 
-	pset.notty = (!isatty(fileno(stdin)) || !isatty(fileno(stdout)));
+	pset.stdin_notty = !isatty(fileno(stdin));
+	pset.notty = (pset.stdin_notty || !isatty(fileno(stdout)));
 
 	pset.getPassword = TRI_DEFAULT;
@@ -314,7 +315,12 @@ main(int argc, char *argv[])
 		if (!pset.notty)
 			initializeInput(options.no_readline ? 0 : 1);
 
-		successResult = MainLoop(stdin);
+		if (options.single_txn && pset.stdin_notty)
+			/* stdin is NOT a tty and -1, process as a file */
+			successResult = process_file("-", true, false);
+		else
+			/* no single transaction, process as if interactive */
+			successResult = MainLoop(stdin);
 	}
 
 	/* clean up */