Re: [GENERAL] Crash in 9.4 Beta when partially collapsing left outer joins
Michael Paquier writes:
> On Tue, Sep 9, 2014 at 10:25 PM, Tom Lane wrote:
>> The reason for the assert is that there should never be an OR directly
>> underneath an OR in the planner after eval_const_expressions has flattened
>> such cases. Evidently commit f343a88 failed to preserve AND/OR flatness
>> in some cases :-(. That code should be taught to do so, rather than
>> lobotomizing this assertion. Lack of flatness causes optimization
>> inefficiencies, which is why we don't want to just allow it.

> Ah, OK, I just saw your commit. So the trick is to add the arguments
> of the subclause when an OR clause is found, to get correct
> flattening here... Thanks!

Right. If you look again at that code in orclauses.c, you'll notice that it is itself assuming AND/OR flatness in its input. We could discard that assumption, but it would just mean moving complexity from the places that currently have to preserve flatness to other places. For instance, right now we suppose that all "top level" WHERE clauses are in the top-level AND list; if we had to check for sub-AND clauses and recurse into those, it would make life complicated in numerous places.

I do wonder, having seen this bug, whether there's someplace we could add assertions to check for AND/OR flatness that would be more certainly hit by a violation.

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Async IO HTTP server frontend for PostgreSQL
Hi Dmitriy,

Are you able to say a little about what's driving your quest for async http-to-pg? I'm curious as to the motivations, and whether they match up with some of my own reasons for wanting to use low-thread-count solutions.

Thanks.
Re: [GENERAL] Crash in 9.4 Beta when partially collapsing left outer joins
On Tue, Sep 9, 2014 at 10:25 PM, Tom Lane wrote:
> Michael Paquier writes:
>> The logic for nested OR looks correct on reading, hence why not simply
>> remove the failing assertion? The attached patch 1 does so.
>
> The reason for the assert is that there should never be an OR directly
> underneath an OR in the planner after eval_const_expressions has flattened
> such cases. Evidently commit f343a88 failed to preserve AND/OR flatness
> in some cases :-(. That code should be taught to do so, rather than
> lobotomizing this assertion. Lack of flatness causes optimization
> inefficiencies, which is why we don't want to just allow it.

Ah, OK, I just saw your commit. So the trick is to add the arguments of the subclause when an OR clause is found, to get correct flattening here... Thanks!
--
Michael
Re: [GENERAL] Convincing STABLE functions to run once
On 09 Sep 2014, at 17:23, Dan Wells wrote:
> I often have functions which are not truly immutable (they do something
> minor, like read in configuration information), but the functions themselves
> are fairly expensive, so I want them to run just once per query. At face
> value, I feel like STABLE should do what I want, but often it does not.

Assuming that the part that makes these functions expensive is not the reading of the configuration information, perhaps you can split your functions such that the expensive part goes into an IMMUTABLE function that takes those (STABLE) configuration values as input?

Alban Hertroys
--
If you can't see the forest for the trees, cut the trees and you'll find there is no forest.
Re: [GENERAL] Pgpool starting problem
Ellen,

To date I have no solution. I'm currently trying to build a debug-able version of 3.3.4, as I think I have a network problem which is preventing pgpool from being a happy shareable system. Each one currently thinks it's a primary upon startup, if it will start at all.

Sent from my iPad

> On Sep 9, 2014, at 2:42 PM, "Ellen [via PostgreSQL]" wrote:
>
> Hi Jay,
> Can you please tell me how you resolved this issue.
> We are running pgpool-II version 3.3.3.
> Thanks.
> Ellen
Re: [GENERAL] Pgpool starting problem
Hi Jay,

Can you please tell me how you resolved this issue. We are running pgpool-II version 3.3.3.

Thanks.
Ellen
Re: [GENERAL] Issue with to_timestamp function
On 09/08/2014 04:18 PM, Lou Oquin wrote:
> Jerry;
> When I run the query you supplied with my database:
>
> select sli.ts::timestamptz as tstamp
> from public.sql_log_import sli
> where sli.id <= 10;
>
> I get the following error:
>
> ERROR: invalid input syntax for type timestamp with time zone: "08/06/2014 03:08:58"
> SQL state: 22007
>
> Thanks
> Lou

Aah, just realized something. When you run the query I sent in my last post, I bet you will find the values in sli.ts were entered with double quotes: "08/06/2014 03:08:58"

So:

test=> create table ts_test (fld_1 text);
CREATE TABLE
test=> insert into ts_test values ('"08/06/2014 03:08:58"');
INSERT 0 1
test=> select * from ts_test;
         fld_1
-----------------------
 "08/06/2014 03:08:58"
(1 row)

test=> select fld_1::timestamptz from ts_test;
ERROR:  invalid input syntax for type timestamp with time zone: ""08/06/2014 03:08:58""

--
Adrian Klaver
adrian.kla...@aklaver.com
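[Editor's note] If the embedded-quotes theory above is right, one possible fix (a sketch only, using the ts_test example from this message) is to strip the double quotes before casting:

```sql
-- Strip the stray double quotes before casting; btrim() removes the
-- listed characters from both ends of the string.
SELECT btrim(fld_1, '"')::timestamptz FROM ts_test;

-- The same idea when populating the final table from the staging table
-- (target table and column list are illustrative, not from the thread):
-- INSERT INTO sql_server_logs (ts, source, severity, message)
-- SELECT btrim(ts, '"')::timestamptz, source, severity, message
-- FROM sql_log_import;
```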
Re: [GENERAL] Convincing STABLE functions to run once
On Tue, Sep 9, 2014 at 10:23 AM, Dan Wells wrote:
> Hello all,
>
> I've run into this issue in several contexts recently, and wonder if folks
> here can help clear up my understanding of function volatility. I often
> have functions which are not truly immutable (they do something minor, like
> read in configuration information), but the functions themselves are fairly
> expensive, so I want them to run just once per query. At face value, I feel
> like STABLE should do what I want, but often it does not. Here is a simple
> example of what I am talking about (tested on 9.1.9):
>
> CREATE TABLE t1(id INT PRIMARY KEY, val TEXT);
>
> -- Using numbers as "text" for convenience
> INSERT INTO t1 SELECT generate_series(1,1000), random() * 1000;
>
> -- The real function reads configuration from the DB, and so
> -- cannot be truthfully IMMUTABLE
> --
> -- This function returns 'text' to better match my real case,
> -- but is otherwise just for demonstration
> --
> CREATE OR REPLACE FUNCTION passthru(myval text)
> RETURNS text
> LANGUAGE plpgsql
> STABLE STRICT
> AS $function$
> DECLARE
> BEGIN
>   RAISE NOTICE 'test';
>   RETURN myval;
> END;
> $function$;

This is kinda off topic, but I'd like to point out that your 'passthru' function is a wonderful debugging trick. I write it like this:

CREATE OR REPLACE FUNCTION Notice(anyelement) RETURNS anyelement AS
$$
BEGIN
  RAISE NOTICE '%', $1;
  RETURN $1;
END;
$$ LANGUAGE plpgsql;

The reason that's so useful is that when you have complicated functions that depend on each other, it can be kind of a pain to adjust complicated SQL so that it RAISEs NOTICEs for the values you'd want to see -- the passthrough function makes it a snap without adjusting query behavior.

merlin
Re: [GENERAL] Issue with to_timestamp function
On 09/08/2014 04:18 PM, Lou Oquin wrote:
> Jerry;
> When I run the query you supplied with my database:
>
> select sli.ts::timestamptz as tstamp
> from public.sql_log_import sli
> where sli.id <= 10;
>
> I get the following error:
>
> ERROR: invalid input syntax for type timestamp with time zone: "08/06/2014 03:08:58"
> SQL state: 22007

So what do you get if you do:

select sli.ts from public.sql_log_import sli where sli.id <= 10;

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] stackbuilder
On 9/9/2014 3:36 AM, Ramesh T wrote:
> I had installed pgadmin3 but did not select StackBuilder. Let me know how
> to add StackBuilder to pgadmin3 for additional add-ons.

AFAIK, you'll need to run the EnterpriseDB Postgres installer again to add StackBuilder.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
Re: [GENERAL] Convincing STABLE functions to run once
Dan Wells writes:
> I've run into this issue in several contexts recently, and wonder if
> folks here can help clear up my understanding of function volatility. I
> often have functions which are not truly immutable (they do something
> minor, like read in configuration information), but the functions
> themselves are fairly expensive, so I want them to run just once per
> query. At face value, I feel like STABLE should do what I want, but
> often it does not.

STABLE tells the system it's *okay* to run the function fewer times than naive SQL semantics might suggest. There's no *guarantee* that any such optimization will happen (and in fact, about the only special thing that currently happens for STABLE functions is that they're considered okay to use in indexscan qualifications).

What I'd suggest is sticking the expensive function call into a CTE (a WITH clause). We do guarantee only-once evaluation for CTEs.

regards, tom lane
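[Editor's note] Applied to the passthru() example from the original post, Tom's CTE suggestion would look something like this (a sketch; in the PostgreSQL versions discussed here, a CTE is an optimization fence and is evaluated at most once):

```sql
-- Evaluate the expensive STABLE function once in a CTE, then reference
-- the result in the main query instead of calling it per row.
WITH pattern AS (
    SELECT passthru('55') || '%' AS p
)
SELECT t1.*
FROM t1, pattern
WHERE t1.val LIKE pattern.p;
```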
Re: [GENERAL] Issue with to_timestamp function
Jerry;

When I run the query you supplied with my database:

select sli.ts::timestamptz as tstamp
from public.sql_log_import sli
where sli.id <= 10;

I get the following error:

ERROR: invalid input syntax for type timestamp with time zone: "08/06/2014 03:08:58"
SQL state: 22007

Thanks
Lou

-----Original Message-----
From: Jerry Sievers [mailto:gsiever...@comcast.net]
Sent: Monday, September 08, 2014 2:31 PM
To: Lou Oquin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Issue with to_timestamp function

Lou Oquin writes:
> I've imported a csv export of an MS SQL Server log file into a staging
> table on my local install of PostgreSQL (9.3/UTF8 encoding) for analysis.
>
> [staging table definition and sample log data snipped -- quoted in full
> elsewhere in this thread]
>
> The final table is very similar, but with a timestamp with time zone
> field for the logged server data. But, when I try to populate the target
> table with data from the staging table, I keep getting an error. The
> issue is associated with the to_timestamp function.

Ok, but why not just cast, since the input data is compatible anyway, at least from what I saw up there...

sj$ psql -efq --no-psqlrc
begin;
BEGIN
create temp table foo as select '08/06/2014 03:08:58'::text as ts;
SELECT 1
   Table "pg_temp_7.foo"
 Column | Type | Modifiers
--------+------+-----------
 ts     | text |

select ts::timestamptz from foo;
           ts
------------------------
 2014-08-06 03:08:58-05
(1 row)

> Here's what I'm seeing: if I use to_timestamp with the text data
> (copied from table sql_log_import.ts), the select statement returns a
> timestamp with time zone, as expected:
>
> -- Executing query:
> select to_timestamp('08/06/2014 03:08:58', 'MM/DD/YYYY hh24:mi:ss')::timestamp with time zone as tstamp
>
> Total query runtime: 78 ms.
> 1 row retrieved.
>
> But, when I select data from the table sql_log_import, I get an error:
>
> -- Executing query:
> select to_timestamp(ts, 'MM/DD/YYYY hh24:mi:ss')::timestamp with time zone as tStamp
> from sql_log_import
> where id <= 10
>
> ** Error **
> SQL state: 22007
> Detail: Value must be an integer.
>
> Any ideas?

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800
Re: [GENERAL] Issue with to_timestamp function
I'm executing the query in pgAdmin3, in a SQL query window. The results are coming from the History tab of the output pane.

Thanks
Lou

-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Monday, September 08, 2014 2:47 PM
To: Lou Oquin; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Issue with to_timestamp function

On 09/08/2014 01:52 PM, Lou Oquin wrote:
> I've imported a csv export of an MS SQL Server log file into a staging
> table on my local install of PostgreSQL (9.3/UTF8 encoding) for analysis.
>
> The staging table definition is:
>
> CREATE TABLE sql_log_import
> (
>   id serial NOT NULL,
>   ts text, -- will convert to ts when merging into sql_server_logs
>   source character varying(30),
>   severity character varying(20),
>   message text,
>   CONSTRAINT sql_log_import_pk PRIMARY KEY (id)
> )
> WITH (
>   OIDS=FALSE
> );
>
> ALTER TABLE sql_log_import
>   OWNER TO postgres;
>
> COMMENT ON COLUMN sql_log_import.ts IS 'will convert to ts when
> merging into sql_server_logs';
>
> Here's a copy of the first few lines of the data imported to table
> sql_log_import:
>
> 08/06/2014 03:08:58,spid4s,Unknown,SQL Trace was stopped due to server
> shutdown. Trace ID = '1'. This is an informational message only; no
> user action is required.
>
> 08/06/2014 03:08:58,spid12s,Unknown,The current event was not reported
> to the Windows Events log. Operating system error = 1717 (The interface
> is unknown.). You may need to clear the Windows Events log if it is full.
>
> 08/06/2014 03:08:58,spid12s,Unknown,Error: 17054 Severity: 16 State: 1.
>
> 08/06/2014 03:08:58,Server,Unknown,The connection has been lost with
> Microsoft Distributed Transaction Coordinator (MS DTC). Recovery of
> any in-doubt distributed transactions involving Microsoft Distributed
> Transaction Coordinator (MS DTC) will begin once the connection is
> re-established. This is an informational message only. No user action
> is required.
>
> 08/06/2014 03:08:58,spid12s,Unknown,Service Broker manager has shut down.
>
> 08/06/2014 03:08:55,Server,Unknown,SQL Server is terminating because
> of a system shutdown. This is an informational message only. No user
> action is required.
>
> The final table is very similar, but with a timestamp with time zone
> field for the logged server data. But, when I try to populate the
> target table with data from the staging table, I keep getting an error.
> The issue is associated with the to_timestamp function.
>
> Here's what I'm seeing: if I use to_timestamp with the text data
> (copied from table sql_log_import.ts), the select statement returns a
> timestamp with time zone, as expected:
>
> -- Executing query:
> select to_timestamp('08/06/2014 03:08:58', 'MM/DD/YYYY hh24:mi:ss')::timestamp with time zone as tstamp
>
> Total query runtime: 78 ms.
> 1 row retrieved.
>
> But, when I select data from the table sql_log_import, I get an error:
>
> -- Executing query:
> select to_timestamp(ts, 'MM/DD/YYYY hh24:mi:ss')::timestamp with time zone as tStamp
> from sql_log_import
> where id <= 10
>
> ** Error **
> SQL state: 22007
> Detail: Value must be an integer.
>
> Any ideas?

Where are you getting this error? Or, to put it another way, where are you executing the query?

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] Issue with to_timestamp function
The data is:

ts
08/06/2014 03:08:58
08/06/2014 03:08:58
08/06/2014 03:08:58

Thanks
Lou

From: Melvin Davidson [mailto:melvin6...@gmail.com]
Sent: Monday, September 08, 2014 2:30 PM
To: Lou Oquin
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Issue with to_timestamp function

I suspect your data is not what you think it is. What do you see when you do:

SELECT ts FROM sql_log_import LIMIT 3;

On Mon, Sep 8, 2014 at 4:52 PM, Lou Oquin wrote:
> I've imported a csv export of an MS SQL Server log file into a staging
> table on my local install of PostgreSQL (9.3/UTF8 encoding) for analysis.
>
> [staging table definition, sample log data, and to_timestamp error
> snipped -- quoted in full elsewhere in this thread]

--
Melvin Davidson
I reserve the right to fantasize. Whether or not you wish to share my fantasy is entirely up to you.
[GENERAL] stackbuilder
Hi,

I had installed pgadmin3 but did not select StackBuilder. Let me know how to add StackBuilder to pgadmin3 for additional add-ons.

thanks,
ram
[GENERAL] Convincing STABLE functions to run once
Hello all,

I've run into this issue in several contexts recently, and wonder if folks here can help clear up my understanding of function volatility. I often have functions which are not truly immutable (they do something minor, like read in configuration information), but the functions themselves are fairly expensive, so I want them to run just once per query. At face value, I feel like STABLE should do what I want, but often it does not. Here is a simple example of what I am talking about (tested on 9.1.9):

----------------------------------------------------------------
CREATE TABLE t1(id INT PRIMARY KEY, val TEXT);

-- Using numbers as "text" for convenience
INSERT INTO t1 SELECT generate_series(1,1000), random() * 1000;

-- The real function reads configuration from the DB, and so
-- cannot be truthfully IMMUTABLE
--
-- This function returns 'text' to better match my real case,
-- but is otherwise just for demonstration
--
CREATE OR REPLACE FUNCTION passthru(myval text)
RETURNS text
LANGUAGE plpgsql
STABLE STRICT
AS $function$
DECLARE
BEGIN
  RAISE NOTICE 'test';
  RETURN myval;
END;
$function$;

EXPLAIN ANALYZE select * from t1 where val like passthru('55') || '%';

ALTER FUNCTION passthru(text) IMMUTABLE;

EXPLAIN ANALYZE select * from t1 where val like passthru('55') || '%';
----------------------------------------------------------------

If you run this, you should see two things:

1) When STABLE, the function still runs many, many times (see notices), despite having a fixed input.
2) When switched to IMMUTABLE, the function runs just once (as expected) and the query is orders of magnitude faster.

Is STABLE working as it should in this example? I've searched around, and in some threads I see explanations that STABLE only *allows* the planner to run the function once, but the planner is free to run it as many times as it sees fit. If this is the case, is there a way to alter the function to tell the planner, "trust me, you only want to run this once per query"? In effect, it seems I want something between IMMUTABLE and the current interpretation of STABLE, maybe a SUPERSTABLE designation or something.

I have also seen that wrapping the function in a subselect is a workaround, but it seems unusual to require such a workaround for what seems like a common need.

Thanks for any insight you might have!

Sincerely,
Dan
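[Editor's note] For reference, the subselect workaround Dan mentions usually takes the form of an uncorrelated scalar subquery, which the planner runs as an InitPlan evaluated once per query (a sketch against the example schema above):

```sql
-- Wrapping the STABLE function call in an uncorrelated scalar subquery
-- turns it into an InitPlan, evaluated once per query rather than per row.
SELECT *
FROM t1
WHERE val LIKE (SELECT passthru('55') || '%');
```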
Re: [GENERAL] psql and tab-delimited output
On 09/08/2014 11:45 AM, Abelard Hoffman wrote:
> Hi Alban.
>
> On Sun, Sep 7, 2014 at 4:18 AM, Alban Hertroys wrote:
>> On 07 Sep 2014, at 10:45, Abelard Hoffman wrote:
>>> For reports, everyone else mostly uses other tools? I'd like to stay
>>> away from GUI tools, if possible.
>>
>> For reporting, usually you use the data in the database directly. A TSV
>> or CSV file is not a report; it's at best a data source for your report.
>> Going through an intermediate format is not a particularly effective
>> approach to create reports, but if you have to (for example because you
>> aren't _allowed_ access to the database), generally preferred formats
>> seem to be CSV, XML or JSON, as long as it's a standard format. TSV is
>> not a common choice. Are you sure your boss actually cares that it's
>> TSV and not, for example, CSV?
>
> Could you expand on that a bit? What sort of tools does management use
> to generate reports from the database directly? You're meaning a data
> warehouse? We just have an OLTP db, so we've always generated reports
> periodically through cron jobs.
>
> Or maybe "reports" is the wrong word. We generate a bunch of db stats
> which can then be used however they want (pulled into Excel, etc.). But
> we would definitely be interested in learning about other approaches.

Ways I have done it:

1) In Excel, use the data tools to run the query and return data directly to the spreadsheet. This assumes a version of Excel that has the data tools.

2) I use Python, so I use psycopg2 to pull data from the database and then xlwt to write out a spreadsheet, or reportlab to create a PDF, or the csv module to create whatever flavor of CSV.

And yes, I'm sure we could convert everything over to CSV. Just an issue of inertia.

Thanks.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] Last active time for a database
On Tue, Sep 9, 2014 at 4:27 AM, Jeff Janes wrote:
> Is there a way for a superuser to find the last time a database had an
> active user connection (while being logged into a different database in
> the same instance, of course)?
>
> The context here is looking for automated integration testing databases
> that have been leaked due to bugs/crashes in the testing framework.

From the client side, not that I am aware of. pg_stat_activity reports only the active connections, not a history of them.

> Yes, I can mine the server log files, but that is really scraping the
> bottom of the barrel.

It depends on the log format; you could always couple a CSV log file, using log_connections = on, with file_fdw and query the logs directly, filtering connection messages.

Another idea could be to use the hook in elog.c to track messages beginning with "connection authorized: " and perform some pre-processing on the error data context to facilitate tracking. That's not much different from the file_fdw solution with, for example, CSV format, but it may help on systems with lots of logs by avoiding processing that file_fdw would otherwise do uselessly.

My 2c.
--
Michael
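[Editor's note] The file_fdw idea Michael describes can be sketched roughly as follows. This assumes log_destination = 'csvlog', logging_collector = on, and log_connections = on; the column list follows the csvlog layout shown in the file_fdw documentation for this era of PostgreSQL, and the log file path is an assumption that must be adapted to the installation:

```sql
-- Expose the server's CSV log as a foreign table, then ask for the
-- last authorized connection per database.
CREATE EXTENSION file_fdw;
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE pglog (
  log_time timestamp(3) with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text
) SERVER pglog
  OPTIONS (filename '/var/lib/postgresql/data/pg_log/postgresql.csv',
           format 'csv');

-- Filter on the connection messages Michael mentions:
SELECT database_name, max(log_time) AS last_connection
FROM pglog
WHERE message LIKE 'connection authorized:%'
GROUP BY database_name;
```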
Re: [GENERAL] Advisory lock grant order
Yossi Cohen writes:
> If I request an advisory lock (pg_advisory_lock) with the same key from
> several sessions, will the lock be granted in the same order as it was
> requested?

Usually. IIRC, the lock code will grant locks out of order if a deadlock would result otherwise. There might be some other exceptions, but I don't recall any.

regards, tom lane
Re: [GENERAL] Crash in 9.4 Beta when partially collapsing left outer joins
Michael Paquier writes:
> The logic for nested OR looks correct on reading, hence why not simply
> remove the failing assertion? The attached patch 1 does so.

The reason for the assert is that there should never be an OR directly underneath an OR in the planner after eval_const_expressions has flattened such cases. Evidently commit f343a88 failed to preserve AND/OR flatness in some cases :-(. That code should be taught to do so, rather than lobotomizing this assertion. Lack of flatness causes optimization inefficiencies, which is why we don't want to just allow it.

regards, tom lane
Re: [GENERAL] Async IO HTTP server frontend for PostgreSQL
2014-09-09 1:28 GMT+04:00 Merlin Moncure:
> On Mon, Sep 8, 2014 at 12:59 PM, Dmitriy Igrishin wrote:
>> Dear community,
>>
>> I need a %subj% -- a high-performance HTTP server solution based on
>> asynchronous IO, with the ability to run PostgreSQL functions from HTML
>> templates asynchronously and pass the results to the HTTP client.
>> For example, consider a simple template:
>>
>> ${get_rows(id := :id)}
>>
>> The function get_rows() will be called asynchronously while dispatching
>> the HTTP request, and its result will stream immediately to the HTTP
>> client via async IO.
>>
>> Currently, I've found only a module for NGINX,
>> https://github.com/FRiCKLE/ngx_postgres
>> but it does not do what I need.
>>
>> Ideally, it should be a simple C (libevent-based) or C++
>> (boost::asio-based) library.
>>
>> Please, if anyone has a ready solution for the idea described above,
>> let me know, because I don't want to waste my time writing it from
>> scratch.
>
> It's not in C, but you should take a very good look at node.js.
>
> merlin

Yeah, it looks interesting, and AFAIK there are already bindings for node.js to the asynchronous libpq API --
https://github.com/brianc/node-postgres/blob/master/src/binding.cc#L43
Thanks for the pointer, Merlin.

--
// Dmitriy.
[GENERAL] Advisory lock grant order
Hi,

If I request an advisory lock (pg_advisory_lock) with the same key from several sessions, will the lock be granted in the same order as it was requested? For example:

session 1: select pg_advisory_lock(1);   -- acquires the lock
session 2: select pg_advisory_lock(1);   -- blocks waiting for the lock
session 3: select pg_advisory_lock(1);   -- blocks waiting for the lock
session 1: select pg_advisory_unlock(1); -- releases the lock

Is it guaranteed that session 2 will now be granted the lock, because it requested the lock before session 3?

Thanks,
Yossi
Re: [GENERAL] Crash in 9.4 Beta when partially collapsing left outer joins
On Tue, Sep 9, 2014 at 2:43 PM, Michael Paquier wrote:
> Some bisecting is showing as well that the commit at the origin of the
> regression is f343a88.

The failure is caused by an assertion that has not been happy since this commit:

frame #4: 0x000101d20670 postgres`generate_bitmap_or_paths(root=0x7fd61d004d48, rel=0x7fd61c033a58, clauses=0x7fd61d010200, other_clauses=0x) + 480 at indxpath.c:1213
frame #5: 0x000101d1fc37 postgres`create_index_paths(root=0x7fd61d004d48, rel=0x7fd61c033a58) + 1255 at indxpath.c:314
frame #6: 0x000101d1146b postgres`set_plain_rel_pathlist(root=0x7fd61d004d48, rel=0x7fd61c033a58, rte=0x7fd61c033c88) + 75 at allpaths.c:397

While reading the code of this commit, I noticed that extract_or_clause has added some logic for nested OR clauses: it extracts their content and adds it directly to the list of subclauses that are then used by generate_bitmap_or_paths, triggering the assertion failure reported by the trace above.

The logic for nested OR looks correct on reading, hence why not simply remove the failing assertion? The attached patch 1 does so. Another approach would consist in removing the nested-OR part and keeping the old assertion logic, like in the attached patch 2, but this seems like a no-go, as f343a88 has actually improved nested OR tracking. Thoughts?

Note: I added as well a regression test in patch 1, as this is IMO the correct approach -- if that's considered correct, of course :)
--
Michael

From 78612c5c80d57b297ef93e992874b571e9bf0f75 Mon Sep 17 00:00:00 2001
From: Michael Paquier
Date: Tue, 9 Sep 2014 16:43:41 +0900
Subject: [PATCH] Fix Assertion failure caused by nested OR at extraction

Commit f343a88 has added some logic in extract_or_clause to extract OR
subclauses, while the index path code in the planner wrongly assumes that
subclauses cannot be OR clauses themselves.

Per report from Benjamin Smith
---
 src/backend/optimizer/path/indxpath.c |  1 -
 src/test/regress/expected/join.out    | 21 +++++++++++++++++++++
 src/test/regress/sql/join.sql         |  3 +++
 3 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index 42dcb11..88bf946 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -1210,7 +1210,6 @@ generate_bitmap_or_paths(PlannerInfo *root, RelOptInfo *rel,
 			List	   *orargs;
 
 			Assert(IsA(orarg, RestrictInfo));
-			Assert(!restriction_is_or_clause((RestrictInfo *) orarg));
 			orargs = list_make1(orarg);
 
 			indlist = build_paths_for_OR(root, rel,
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index 1cb1c51..5079ba0 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -2827,6 +2827,27 @@ select * from tenk1 a join tenk1 b on
    Index Cond: (unique2 = 3)
 (12 rows)
 
+explain (costs off)
+select * from tenk1 a join tenk1 b on
+  a.unique1 = 1 or ((a.unique1 = 2 or a.unique1 = 3) and b.ten = 4);
+                                         QUERY PLAN
+---------------------------------------------------------------------------------------------
+ Nested Loop
+   Join Filter: ((a.unique1 = 1) OR (((a.unique1 = 2) OR (a.unique1 = 3)) AND (b.ten = 4)))
+   ->  Seq Scan on tenk1 b
+   ->  Materialize
+         ->  Bitmap Heap Scan on tenk1 a
+               Recheck Cond: ((unique1 = 1) OR ((unique1 = 2) OR (unique1 = 3)))
+               ->  BitmapOr
+                     ->  Bitmap Index Scan on tenk1_unique1
+                           Index Cond: (unique1 = 1)
+                     ->  BitmapOr
+                           ->  Bitmap Index Scan on tenk1_unique1
+                                 Index Cond: (unique1 = 2)
+                           ->  Bitmap Index Scan on tenk1_unique1
+                                 Index Cond: (unique1 = 3)
+(14 rows)
+
 --
 -- test placement of movable quals in a parameterized join tree
 --
diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql
index fa3e068..c170b09 100644
--- a/src/test/regress/sql/join.sql
+++ b/src/test/regress/sql/join.sql
@@ -774,6 +774,9 @@ select * from tenk1 a join tenk1 b on
 explain (costs off)
 select * from tenk1 a join tenk1 b on
   (a.unique1 = 1 and b.unique1 = 2) or (a.unique2 = 3 and b.ten = 4);
+explain (costs off)
+select * from tenk1 a join tenk1 b on
+  a.unique1 = 1 or ((a.unique1 = 2 or a.unique1 = 3) and b.ten = 4);
 --
 -- test placement of movable quals in a parameterized join tree
 --
-- 
2.1.0

diff --git a/src/backend/optimizer/util/orclauses.c b/src/backend/optimizer/util/orclauses.c
index 9e954d0..387a308 100644
--- a/src/backend/optimizer/util/orclauses.c
+++ b/src/backend/optimizer/util/orclauses.c
@@ -190,21 +190,8 @@ extract_or_clause(RestrictInfo *or_rinfo, RelOptInfo *rel)
 	RestrictInfo *rinfo = (