Re: [GENERAL] Create Action for psql when NOTIFY Received
On Mon, Aug 28, 2017 at 6:42 PM, Jerry Regan <jerry.re...@concertoglobalresources.com> wrote:
> Let’s suppose for a moment that I piped the output of a psql instance to
> awk or some similar program, configured to detect the NOTIFY. That program
> would then spawn a process to actually perform the work, parameters being
> whatever is part of the NOTIFY. Both this psql instance and the awk script
> would be dedicated to this task.
>
> Given this is not intended in any way to be production quality code - in
> fact, it’s intended to deliver XML to the client server for validation
> (xmllint) in a development/test environment - do you see anything that
> clearly won’t work? Also, this would be a very low volume connection.
> Perhaps one NOTIFY in five minutes - or longer.

I've been curious about having a long-running psql instance that could be controlled by an external process (named pipes/fifos, I believe). It seems like you might actually have a chance to get that working if you, 1, intend to perform the notification polling automatically and, 2, don't wish to lose any notifications (i.e., you must keep the psql process that issues LISTEN running continuously).

It seems you'd actually need two of these, since you don't actually want the output to be sent to stdout or a normal file, but rather to a file that is linked to the stdin of yet another long-running process. I believe *nix provides sufficient tools, but whether psql is written to a sufficient level of compatibility to leverage them is something I don't know. Last time I mentioned this, I got one other person in the same boat (thoughtful, but not worth the effort to investigate) and no one speaking up to claim they've done it already.

Given your rough description, I'm not seeing why you wouldn't just have the trigger write a record out to some kind of journal table and poll that table for new records whenever you wish, instead of depending upon LISTEN/NOTIFY.

David J.
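A rough sketch of the pipeline described above, for illustration only. The notification-line format is assumed from psql's usual output ("Asynchronous notification "chan" with payload "..." received from server process with PID n.") and should be checked against your psql version; handle_payload is a hypothetical worker you would supply.

```shell
# Hypothetical dispatcher for the awk-style setup described above: read
# psql's stdout line by line, extract any NOTIFY payload, and hand it to
# a worker function. Channel/handler names are made up.

extract_payload() {
    # Pull the payload out of a line like:
    #   Asynchronous notification "xml_ready" with payload "42" received ...
    sed -n 's/.*Asynchronous notification .* with payload "\([^"]*\)".*/\1/p'
}

dispatch() {
    while IFS= read -r line; do
        payload=$(printf '%s\n' "$line" | extract_payload)
        # handle_payload is the hypothetical worker (e.g. runs xmllint)
        [ -n "$payload" ] && handle_payload "$payload"
    done
}
```

In the scenario above this would be wired up roughly as `psql ... | dispatch`, with something periodically feeding psql commands so it polls for notifications at all - which is exactly the caveat raised elsewhere in this thread.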
Re: [GENERAL] Create Action for psql when NOTIFY Received
"David G. Johnston"writes: > On Mon, Aug 28, 2017 at 1:28 PM, Jerry Regan < > jerry.re...@concertoglobalresources.com> wrote: >> My concern is how, after LISTENing in psql, I can tell it what to do when >> the NOTItFY is received. > As far as I am aware you cannot. Yes, and psql is not designed to do anything of its own accord, so I think the answer is really "use another program". > "Whenever a command is executed, psql also polls for asynchronous > notification events generated by LISTEN and NOTIFY." Exactly. If you don't feed it a command, it just sits there. > I suspect the feature request would be something like: > \set NOTIFY_PROGRAM './process-notify-request.bash' (or an equivalent > meta-command) > And psql would invoke said program and pass the content of the notification > payload to it via stdin. Such a program could only execute after the next time you give a command to psql. You could maybe imagine feeding it a continuous stream of dummy commands, but that's pretty silly (and rather defeats the point of LISTEN, which is to *not* eat cycles while waiting). This isn't something that could be easily fixed, AFAICS. Even if we wanted to make psql pay attention to asynchronous data arrival, how would we get control back from libreadline? And what would happen if the user had typed a partial line of input? You really are much better off creating a program that opens its own connection to the DB and sits there listening. psql cannot help you meaningfully with this request, and I can't see a way to make it do so that wouldn't be a monstrous kluge. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Create Action for psql when NOTIFY Received
On Mon, Aug 28, 2017 at 1:28 PM, Jerry Regan <jerry.re...@concertoglobalresources.com> wrote:
> My concern is how, after LISTENing in psql, I can tell it what to do when
> the NOTIFY is received.

As far as I am aware, you cannot. The docs for psql, and its feature set, with respect to LISTEN are minimal and basically say that psql will print out notifications to stdout (this I'm forced to assume, or read the code) and will poll for notifications whenever it sends a query to the server.

https://www.postgresql.org/docs/current/static/app-psql.html
"Whenever a command is executed, psql also polls for asynchronous notification events generated by LISTEN and NOTIFY."

I suspect the feature request would be something like:

\set NOTIFY_PROGRAM './process-notify-request.bash' (or an equivalent meta-command)

And psql would invoke said program and pass the content of the notification payload to it via stdin. Given what we allow for "\copy", I don't see any technical or conceptual problems with such a feature. It just needs someone to be its primary author.

David J.
Re: [GENERAL] Installing 9.5 doesn't create 'postgres' unix account on CentOS 7
Hi,

On Mon, 2017-08-28 at 05:13 +, John Lawlor wrote:
> yum install postgresql95 postgresql95-server.x86_64
>
> Then I try to switch to the 'postgres' user normally created by installing postgres:
>
> su - postgres
> su: user postgres does not exist

Can you please paste the full output of the yum install?

Regards,
--
Devrim Gündüz
EnterpriseDB: https://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR
[GENERAL] Create Action for psql when NOTIFY Received
I have a 9.4 database. I think I’ve read all the LISTEN and NOTIFY entries in the PostgreSQL documentation. I’ve done internet searches, hopefully asking the correct question. But I’m stumped.

I want psql to LISTEN for a NOTIFY that a trigger, yet to be defined, issues AFTER INSERT into the table. I think I know how to do all this. My concern is how, after LISTENing in psql, I can tell it what to do when the NOTIFY is received.

In my internet searches, I found scripts to be used with Twisted. I don’t have Twisted, and I don’t want to install Twisted. I just want psql to return a column from the table and hand it off to a process external to psql. I believe I know how to call external processes from psql. My issue remains: how do I tell psql what to do when it receives a NOTIFY? I don’t see anything in the docs for LISTEN except how to specify the channel - nothing about what to do when the NOTIFY (with whatever parameters might be included) is detected.

/s/jr
Consultant
Concerto GR
Mobile: 612.208.6601

Concerto - a composition for orchestra and a soloist
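For the trigger half of the setup described here, a sketch along these lines would raise the NOTIFY after each insert. The table, channel, and column names are placeholders, not anything from the original post:

```sql
-- Hypothetical AFTER INSERT trigger that raises a NOTIFY with the new
-- row's key as payload. Table/channel/column names are placeholders.
CREATE OR REPLACE FUNCTION notify_xml_ready() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('xml_ready', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER xml_ready_notify
    AFTER INSERT ON staging_table
    FOR EACH ROW EXECUTE PROCEDURE notify_xml_ready();

-- In the listening session:
LISTEN xml_ready;
```

pg_notify() is equivalent to the NOTIFY statement but takes the channel and payload as ordinary arguments, which is convenient inside a trigger function.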
[GENERAL] Installing 9.5 doesn't create 'postgres' unix account on CentOS 7
Hi,

I am installing Postgres 9.5 on a CentOS 7 server. I have done the following:

yum install https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-redhat95-9.5-2.noarch.rpm
yum install postgresql95 postgresql95-server.x86_64

Then I try to switch to the 'postgres' user normally created by installing postgres:

su - postgres
su: user postgres does not exist

If I install the bundled postgres that comes with CentOS, the postgres linux account is always created. Not sure why it's not created for 9.5.

Regards,
John
Re: [GENERAL] Performance with high correlation in group by on PK
On Mon, Aug 28, 2017 at 5:22 AM, Alban Hertroys wrote:
> Hi all,
>
> It's been a while since I actually got to use PG for anything serious, but we're finally doing some experimentation @work now to see if it is suitable for our datawarehouse. So far it's been doing well, but there is a particular type of query I run into that I expect we will frequently use and that's choosing a sequential scan - and I can't fathom why.
>
> This is on:
>
> The query in question is:
> select "VBAK_MANDT", max("VBAK_VBELN")
> from staging.etl1_vbak
> group by "VBAK_MANDT";
>
> This is the header-table for another detail table, and in this case we're already seeing a seqscan. The thing is, there are 15M rows in the table (disk usage is 15GB), while the PK is on ("VBAK_MANDT", "VBAK_VBELN") with very few distinct values for "VBAK_MANDT" (in fact, we only have 1 at the moment!).

You need an "index skip scan" or "loose index scan". PostgreSQL doesn't currently detect and implement them automatically, but you can use a recursive CTE to get it to work. There are some examples at https://wiki.postgresql.org/wiki/Loose_indexscan

Cheers,
Jeff
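The wiki pattern applied to the query in question might look roughly like this - an untested sketch against the table names from the original post:

```sql
-- Loose index scan sketch: walk the distinct "VBAK_MANDT" values via the
-- PK index, then probe max("VBAK_VBELN") per value.
WITH RECURSIVE m AS (
    SELECT min("VBAK_MANDT") AS mandt FROM staging.etl1_vbak
    UNION ALL
    SELECT (SELECT min("VBAK_MANDT") FROM staging.etl1_vbak
            WHERE "VBAK_MANDT" > m.mandt)
    FROM m
    WHERE m.mandt IS NOT NULL
)
SELECT mandt AS "VBAK_MANDT",
       (SELECT max("VBAK_VBELN") FROM staging.etl1_vbak
        WHERE "VBAK_MANDT" = m.mandt) AS max_vbeln
FROM m
WHERE m.mandt IS NOT NULL;
```

Each step of the recursion does one index probe for the next distinct "VBAK_MANDT", so with only one distinct value this should touch a handful of index pages instead of scanning 15M rows.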
Re: [GENERAL] Logging the fact that a log was shipped
* Christoph Moench-Tegeder (c...@burggraben.net) wrote:
> ## Stephen Frost (sfr...@snowman.net):
> > Worse, such scripts run the serious risk of losing WAL if a crash
> > happens because nothing is ensuring that the WAL has been sync'd to disk
> > before returning from the archive_command.
>
> That risk already exists when using rsync/scp/... and should be
> mitigated by filesystem settings on the receiving side.

I was including rsync/scp/similar tools, yes, just pointing out that such tools should be avoided when doing PG backups and WAL archiving. I have a hard time seeing "require filesystems be mounted as sync" as really being a viable solution, though I suppose it would be technically correct.

Thanks!

Stephen
Re: [GENERAL] Logging the fact that a log was shipped
## Ron Johnson (ron.l.john...@cox.net):
> > I'd put a wrapper around your rsync (a short shell script
> > would be sufficient) in the lines of rsync ... && logger "done",
> > that way you'd get the information via syslog.
>
> And if logging to stderr?

In that case your original approach could have worked - after all, archive_command is run via a simple system() call. It might depend on the logging collector and how you started PostgreSQL in the first place... I'm not sure why it didn't work for you, but your original mail didn't have much detail in that area. Perhaps just add a -v to the rsync command line and see what happens?

Regards,
Christoph
--
Spare Space.
Re: [GENERAL] Logging the fact that a log was shipped
## Stephen Frost (sfr...@snowman.net):
> Worse, such scripts run the serious risk of losing WAL if a crash
> happens because nothing is ensuring that the WAL has been sync'd to disk
> before returning from the archive_command.

That risk already exists when using rsync/scp/... and should be mitigated by filesystem settings on the receiving side.

> Most of the existing tools for dealing with WAL archival (pgbackrest,
> barman and WAL-E, at least) already log successful and unsuccessful
> archive command runs. I'm pretty sure barman supports back to 8.4 and I
> know pgbackrest does.

Barman would have to use rsync for the archiving with PostgreSQL 8.4 (pg_receivexlog only came along in 9.2, replication slots in 9.4). Logging will not happen on completion of archive_command (barman doesn't know much about that) but only when the xlog segment is being processed by barman - which could be quite some time later.

Regards,
Christoph
--
Spare Space.
Re: [GENERAL] Logging the fact that a log was shipped
On 08/28/2017 08:22 AM, Stephen Frost wrote:
> * Christoph Moench-Tegeder (c...@burggraben.net) wrote:
> > ## Ron Johnson (ron.l.john...@cox.net):
> > > How is this done in v8.4? (I tried adding "date; rsync ..." but pg didn't like that *at all*.)
> >
> > There's a DEBUG1-level log message on successful archive_command completion - that would give you a lot of other low-prio log messages which you probably don't care about. I'd put a wrapper around your rsync (a short shell script would be sufficient) in the lines of rsync ... && logger "done", that way you'd get the information via syslog. On the other hand, do you really need the details about each WAL segment? Since 9.4 there's pg_stat_archiver... You're really making your job harder than it needs to be with that ancient PostgreSQL...
>
> Worse, such scripts run the serious risk of losing WAL if a crash happens because nothing is ensuring that the WAL has been sync'd to disk before returning from the archive_command.
>
> Most of the existing tools for dealing with WAL archival (pgbackrest, barman and WAL-E, at least) already log successful and unsuccessful archive command runs. I'm pretty sure barman supports back to 8.4 and I know pgbackrest does.

Thanks for the info on pgbackrest.

--
World Peace Through Nuclear Pacification
Re: [GENERAL] jdbc driver vis Release 10
Well, we won't be fixing any bugs in anything but the latest version, if that makes a difference to you.

Dave Cramer
da...@postgresintl.com
www.postgresintl.com

On 25 August 2017 at 21:34, Rob Sargent wrote:
>> On Aug 25, 2017, at 5:55 PM, Dave Cramer wrote:
>> There are newer versions out there!
>>
>> On 25 August 2017 at 19:53, Rob Sargent wrote:
>>> On 08/25/2017 05:34 PM, Dave Cramer wrote:
>>>> The JDBC driver release is divorced from the server release.
>>>
>>> Thanks
>>>
>>>> On 23 August 2017 at 19:33, Rob Sargent wrote:
>>>>> I see no mention of a new jdbc driver in the release notes for Beta 1. Does that mean there isn't one?
>>>
>>> Yes, so I've learned. I just ran my software using jdbc 9.4.1212 via jOOQ 3.9 against pg10-beta3 without a problem. I do very vanilla stuff.
>
> Yes. I'll get there as needed but wanted to know if any piece needed changing NOW
Re: [GENERAL] Logging the fact that a log was shipped
* Christoph Moench-Tegeder (c...@burggraben.net) wrote:
> ## Ron Johnson (ron.l.john...@cox.net):
> > How is this done in v8.4? (I tried adding "date; rsync ..." but pg
> > didn't like that *at all*.)
>
> There's a DEBUG1-level log message on successful archive_command
> completion - that would give you a lot of other low-prio log
> messages which you probably don't care about.
> I'd put a wrapper around your rsync (a short shell script
> would be sufficient) in the lines of rsync ... && logger "done",
> that way you'd get the information via syslog.
> On the other hand, do you really need the details about each WAL
> segment? Since 9.4 there's pg_stat_archiver... You're really
> making your job harder than it needs to be with that ancient
> PostgreSQL...

Worse, such scripts run the serious risk of losing WAL if a crash happens because nothing is ensuring that the WAL has been sync'd to disk before returning from the archive_command.

Most of the existing tools for dealing with WAL archival (pgbackrest, barman and WAL-E, at least) already log successful and unsuccessful archive command runs. I'm pretty sure barman supports back to 8.4 and I know pgbackrest does.

Thanks!

Stephen
Re: [GENERAL] Logging the fact that a log was shipped
On 08/28/2017 06:06 AM, Christoph Moench-Tegeder wrote:
> ## Ron Johnson (ron.l.john...@cox.net):
> > How is this done in v8.4? (I tried adding "date; rsync ..." but pg didn't like that *at all*.)
>
> There's a DEBUG1-level log message on successful archive_command completion - that would give you a lot of other low-prio log messages which you probably don't care about. I'd put a wrapper around your rsync (a short shell script would be sufficient) in the lines of rsync ... && logger "done", that way you'd get the information via syslog.

And if logging to stderr?

> On the other hand, do you really need the details about each WAL segment? Since 9.4 there's pg_stat_archiver... You're really making your job harder than it needs to be with that ancient PostgreSQL...

That's far beyond my control.

--
World Peace Through Nuclear Pacification
Re: [GENERAL] Performance with high correlation in group by on PK
On 28 August 2017 at 14:22, Alban Hertroys wrote:
> This is on:

Just noticed I forgot to paste this in:

warehouse=# select version();
                                                 version
----------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.
[GENERAL] Performance with high correlation in group by on PK
Hi all,

It's been a while since I actually got to use PG for anything serious, but we're finally doing some experimentation @work now to see if it is suitable for our datawarehouse. So far it's been doing well, but there is a particular type of query I run into that I expect we will frequently use, and that's choosing a sequential scan - and I can't fathom why.

This is on:

The query in question is:

select "VBAK_MANDT", max("VBAK_VBELN")
from staging.etl1_vbak
group by "VBAK_MANDT";

This is the header-table for another detail table, and in this case we're already seeing a seqscan. The thing is, there are 15M rows in the table (disk usage is 15GB), while the PK is on ("VBAK_MANDT", "VBAK_VBELN") with very few distinct values for "VBAK_MANDT" (in fact, we only have 1 at the moment!).

Explain analyze says the following about this query:

warehouse=# explain (analyze, buffers) select "VBAK_MANDT", max("VBAK_VBELN") from staging.etl1_vbak group by "VBAK_MANDT";
                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=1990054.08..1990054.09 rows=1 width=36) (actual time=38723.602..38723.602 rows=1 loops=1)
   Group Key: "VBAK_MANDT"
   Buffers: shared hit=367490 read=1409344
   ->  Seq Scan on etl1_vbak  (cost=0.00..1918980.72 rows=14214672 width=15) (actual time=8.886..31317.283 rows=14214672 loops=1)
         Buffers: shared hit=367490 read=1409344
 Planning time: 0.126 ms
 Execution time: 38723.674 ms
(7 rows)

As you can see, a sequential scan. The statistics seem quite correct.

If I change the query to select a single value of "VBAK_MANDT" we get:

warehouse=# explain (analyze, buffers) select max("VBAK_VBELN") from staging.etl1_vbak where "VBAK_MANDT" = '300';
                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=1.37..1.38 rows=1 width=32) (actual time=14.911..14.911 rows=1 loops=1)
   Buffers: shared hit=2 read=3
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.56..1.37 rows=1 width=11) (actual time=14.907..14.908 rows=1 loops=1)
           Buffers: shared hit=2 read=3
           ->  Index Only Scan Backward using etl1_vbak_pkey on etl1_vbak  (cost=0.56..11498362.31 rows=14214672 width=11) (actual time=14.906..14.906 rows=1 loops=1)
                 Index Cond: (("VBAK_MANDT" = '300'::bpchar) AND ("VBAK_VBELN" IS NOT NULL))
                 Heap Fetches: 1
                 Buffers: shared hit=2 read=3
 Planning time: 0.248 ms
 Execution time: 14.945 ms
(11 rows)

That is more in line with my expectations. Oddly enough, adding "VBAK_MANDT" and the group by back into that last query, the result is a seqscan again.

For "VBAK_MANDT" we see these statistics:

 Null fraction:            0
 Average width:            4
 Distinct values:          1
 Most common values:       {300}
 Most common frequencies:  {1}
 Histogram bounds:
 Correlation:              1

The table definition is:

             Table "staging.etl1_vbak"
   Column    |     Type      | Modifiers
-------------+---------------+-----------
 VBAK_MANDT  | character(3)  | not null
 VBAK_VBELN  | character(10) | not null
 VBAK_ERDAT  | date          | not null
 VBAK_ERZET  | character(6)  | not null
 VBAK_ERNAM  | character(12) | not null
 VBAK_ANGDT  | date          | not null
 VBAK_BNDDT  | date          | not null
 VBAK_AUDAT  | date          | not null
...
 VBAK_MULTI  | character(1)  | not null
 VBAK_SPPAYM | character(2)  | not null
Indexes:
    "etl1_vbak_pkey" PRIMARY KEY, btree ("VBAK_MANDT", "VBAK_VBELN")
    "idx_etl1_vbak_erdat" btree ("VBAK_ERDAT")

A final remark: the table definition was generated by our reporting/ETL software, hence the case-sensitive column names and the use of the character type instead of varchar (or text).

What can be done to help the planner choose a smarter plan?

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.
Re: [GENERAL] Logging the fact that a log was shipped
## Ron Johnson (ron.l.john...@cox.net):
> How is this done in v8.4? (I tried adding "date; rsync ..." but pg
> didn't like that *at all*.)

There's a DEBUG1-level log message on successful archive_command completion - that would give you a lot of other low-prio log messages which you probably don't care about. I'd put a wrapper around your rsync (a short shell script would be sufficient) in the lines of rsync ... && logger "done", that way you'd get the information via syslog.

On the other hand, do you really need the details about each WAL segment? Since 9.4 there's pg_stat_archiver... You're really making your job harder than it needs to be with that ancient PostgreSQL...

Regards,
Christoph
--
Spare Space.
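One way to combine the logger suggestion with an explicit flush, as a rough sketch only. The paths are illustrative, a plain cp stands in for rsync, and - per the concern raised elsewhere in this thread - a wrapper like this is still no substitute for a real archiving tool:

```shell
#!/bin/sh
# Hypothetical archive_command wrapper: copy the WAL segment, force it to
# disk, then report success via syslog. It would be invoked as, e.g.:
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f /mnt/wal_archive'
# All names and paths here are illustrative.

archive_wal() {
    src=$1; file=$2; dir=$3
    cp "$src" "$dir/$file.tmp" || return 1
    sync || return 1               # crude: flushes everything, not just this file
    mv "$dir/$file.tmp" "$dir/$file" || return 1
    logger -t wal-archive "archived $file" 2>/dev/null || true
}
```

Returning non-zero makes PostgreSQL keep the segment and retry, which is why every step short-circuits on failure; the copy-to-temp-then-rename avoids a half-written segment being mistaken for a complete one.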
Re: [SPAM] Re: [GENERAL] Invalid magic number 0000 in log file
On 25/08/2017 22:01, Peter Eisentraut wrote:
> On 8/25/17 14:24, Moreno Andreo wrote:
> > I know that's related to XLOG files, but I have no idea how to deal with it. I had a quick googling but found nothing but "your files are corrupted. You'd better initdb."
>
> That's pretty much it.

Thanks, I'll report to the customer.

> You might not have to initdb if you have an uncorrupted copy of the WAL files somewhere (i.e., a backup). But then I would have concerns about what else might be corrupted.

Probably the system was not booting because of a faulty hard disk, and the fault also involved some WAL files. Once reinstalled, the system marked the faulty sectors as unusable and everything is fine now, except for what was corrupted before. I'll tell him to have a deep check of OS events and of the disk itself.

Thanks,
Moreno
Re: [GENERAL] Extension coverage
On Sat, Aug 26, 2017 at 6:28 PM, Gabriel Furstenheim Milerud wrote:
> Not sure I follow. Do you have an example that I could check?
> I have the impression that my problem is that no .gcda files are created. If I just run the lcov part:
> lcov -d . -c -o lcov.info
>
> I get
>    Found gcov version: 5.4.0
>    Scanning . for .gcda files ...
>    geninfo: WARNING: no .gcda files found in . - skipping!
>    Finished .info-file creation
>
> and an empty file lcov.info is created.
>
> Just to check, I've tried running make coverage and make coverage-html in the folder of postgres instead of the extension, and that works.

There is no need to complicate your Makefile with a custom coverage target, which is, at least it seems to me, the origin of the problems you are seeing here. So you could just use the one that Postgres' PGXS provides. Here is a simple Makefile I have used for coverage testing with an extension:
https://github.com/michaelpq/pg_plugins/blob/master/decoder_raw/Makefile

If you enforce abs_top_srcdir=$(pwd) with make coverage, or coverage-html if you want, then both are able to work properly. At least for me they do.
--
Michael
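For reference, a minimal PGXS-style Makefile of the kind being suggested might look like this; "my_ext" is a placeholder extension name:

```makefile
# Minimal PGXS Makefile sketch; "my_ext" is a placeholder.
EXTENSION = my_ext
MODULES   = my_ext
DATA      = my_ext--1.0.sql

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

With this, `make coverage-html abs_top_srcdir=$(pwd)` should pick up the .gcda files, as described above - provided the PostgreSQL installation that pg_config points at was built with --enable-coverage, since the gcov instrumentation comes from the server build flags.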
Re: [GENERAL] Retrieving query results
On Sun, Aug 27, 2017 at 12:12 AM, Tom Lane wrote:
> Michael Paquier writes:
> > On Fri, Aug 25, 2017 at 8:10 AM, Tom Lane wrote:
> > > I think the real problem occurs where we realloc the array bigger.
>
> > Looking at the surroundings, I think that it would be nice to have
> > pqAddTuple and PQsetvalue set an error message with this patch.
>
> Yeah, I was thinking about that myself - the existing design presumes
> that the only possible reason for failure of pqAddTuple is OOM, but
> it'd be better to distinguish "too many tuples" from true OOM. So
> we should also refactor to make pqAddTuple responsible for setting
> the error message. Might be best to do the refactoring first.

Attached are two patches:
1) 0001 refactors the code around pqAddTuple to be able to handle error messages and assign them in PQsetvalue particularly.
2) 0002 adds sanity checks in pqAddTuple for overflows, maximizing the size of what is allocated to INT_MAX but not more.

pqRowProcessor() still has errmsgp, but it is never used on HEAD. At least with this set of patches it comes to be useful. We could rework check_field_number() to use an error message string as well, but I have left that out to keep things simple. I'm not sure any complication is worth it compared to just copying the error message in case of a mismatched column number.

Attached as well is a small program I have used to test PQsetvalue through PQcopyResult, to see if results get correctly allocated at each call, looking at the error message stacks on the way.
--
Michael

/*
 * Script to test PQcopyResult and subsequently PQsetvalue.
 * Compile with, for example:
 * gcc -lpq -g -o pg_copy_res pg_copy_res.c
 */

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

#define DEFAULT_PORT "5432"
#define DEFAULT_HOST "/tmp"
#define DEFAULT_DB "postgres"

int
main()
{
	char	   *port = getenv("PGPORT");
	char	   *host = getenv("PGHOST");
	char	   *dbname = getenv("PGDATABASE");
	char		connstr[512];
	PGconn	   *conn;
	PGresult   *res, *res_copy;

	if (port == NULL)
		port = DEFAULT_PORT;
	if (host == NULL)
		host = DEFAULT_HOST;
	if (dbname == NULL)
		dbname = DEFAULT_DB;

	snprintf(connstr, sizeof(connstr),
			 "port=%s host=%s dbname=%s", port, host, dbname);

	conn = PQconnectdb(connstr);
	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "Connection to database failed: %s",
				PQerrorMessage(conn));
		return 1;
	}

	res = PQexec(conn, "SELECT 1");

	/* Copy the result wanted, whatever it is... */
	res_copy = PQcopyResult(res, PG_COPYRES_TUPLES | PG_COPYRES_ATTRS);

	PQclear(res);
	PQclear(res_copy);
	PQfinish(conn);
	return 0;
}

Attachments:
0001-Refactor-error-message-handling-in-pqAddTuple.patch
0002-Improve-overflow-checks-of-pqAddTuple-in-libpq.patch