Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Tom Lane
brahmesr  writes:
> SELECT ARRAY (SELECT ROW (COL1,COL2, COUNT(*) *AS txn_cnt* )::
> ap.validate_crtr_line_items$inv_lines_rt FROM Distinct_Records GROUP BY
> COL1, COL2  HAVING COUNT(*) > 1) INTO L_INV_LINES;

> ERROR:  syntax error at or near "AS"
> LINE 73: COL1,COL2, COUNT(*) AS txn_cnt...

> Why "AS" is throwing an error ?

"AS" is part of SELECT-list syntax, not ROW(...) syntax.

Even if it were allowed in ROW(), it would be totally pointless in
this context, because when you cast the ROW() result to the
ap.validate_crtr_line_items$inv_lines_rt composite type, that type
is what determines the column names.
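Concretely, the fix is just to drop the AS clause (a sketch using the poster's identifiers; the composite type supplies the field names after the cast):

```sql
SELECT ARRAY(
  SELECT ROW(col1, col2, COUNT(*))::ap.validate_crtr_line_items$inv_lines_rt
  FROM distinct_records
  GROUP BY col1, col2
  HAVING COUNT(*) > 1
) INTO l_inv_lines;
```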

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Tom Lane
Chris Mair  writes:
> Whenever a session has performed a query on a foreign table, any subsequent
> query on a local table big enough to use the parallel query feature exits with
> an error:
>  ERROR:  invalid cache ID: 41
>  CONTEXT:  parallel worker

Hm, syscache 41 is PROCNAMEARGSNSP in 9.6, if I counted right.

> (gdb) bt
> #0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
> #1  0x7f16a0f4e8e8 in abort () from /lib64/libc.so.6
> #2  0x008094b4 in errfinish (dummy=dummy@entry=0) at elog.c:557
> #3  0x0080aea2 in elog_finish (elevel=elevel@entry=22, 
> fmt=fmt@entry=0x9d9965 "invalid cache ID: %d") at elog.c:1378
> #4  0x007ffd82 in SearchSysCacheList (cacheId=cacheId@entry=41, 
> nkeys=nkeys@entry=2, key1=key1@entry=139734905138463, key2=<optimized out>, 
> key3=key3@entry=0, key4=key4@entry=0) at syscache.c:1210
> #5  0x7f169161a59f in _PG_init () at oracle_fdw.c:709
> #6  0x0080c476 in internal_load_library (libname=<optimized out>, 
> libname@entry=0x7f16a1bfdde8 ) at 
> dfmgr.c:276
> #7  0x0080c708 in RestoreLibraryState (start_address=0x7f16a1bfdde8 
> ) at dfmgr.c:741
> #8  0x004e72cf in ParallelWorkerMain (main_arg=<optimized out>) at 
> parallel.c:1069

Apparently, oracle_fdw is trying to do a procedure lookup in its _PG_init
function.  This is a horrible idea: it assumes that _PG_init is invoked
inside a transaction, which is wrong if the library is preloaded, for
example.  (I'd bet that adding oracle_fdw to shared_preload_libraries
would fail badly, though perhaps not with this exact error message.)

So I'd call this an oracle_fdw bug.  It needs to postpone what it's
doing here to the first normal FDW function call in a session.

regards, tom lane




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Tom Lane
Pavel Stehule  writes:
> 2017-11-19 18:57 GMT+01:00 Brahmam Eswar :
>> How to collect multiple columns into array which is composite data type of
>> all select colums

> SELECT ARRAY(SELECT ROW(col1, col2, ...) INTO

You probably need an explicit cast to the rowtype.  That is,

declare myarray rowtypename[];
...
select array(select row(col1, ...)::rowtypename from ...) into myarray;

        regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> It doesn't seem impossible to get into a situation where syslogger is
> the source of the OOM. Just enabling a lot of logging in a workload with
> many large query strings might do it.  So making it less likely to be
> killed might make the problem worse...

Hm, so that's another angle David didn't report on: is it possible that
his workload could have resulted in a very large volume of incomplete
in-progress log messages?

        regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
>> What might be worth thinking about is allowing the syslogger process to
>> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
>> down to the same vulnerability as the postmaster's other child processes.

> Hm. I'm a bit scared about that - it doesn't seem that inconceivable
> that various backends log humongous multi-line messages, leading to
> syslogger *actually* taking up a fair amount of memory. Note that we're
> using plain stringinfos that ereport(ERROR) out of memory situations,
> rather than failing more gracefully.

True, but there's no hard limits on the postmaster's memory consumption
either ... and if the syslogger does get killed on such a basis, we have
at the least lost a bunch of log output.  On the whole I think we'd be
better off trying to prevent OOM kills on the syslogger.  (That doesn't
preclude other mitigation measures.)

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
>> David Pacheco  writes:
>>> I ran into what appears to be a deadlock in the logging subsystem.  It
>>> looks like what happened was that the syslogger process exited because it
>>> ran out of memory.  But before the postmaster got a chance to handle the
>>> SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
>>> That also failed, and the postmaster went to log a message about it, but
>>> it's blocked on the pipe that's normally connected to the syslogger,
>>> presumably because the pipe is full because the syslogger is gone and
>>> hasn't read from it.

>> Ugh.

> I'm somewhat inclined to say that one has to live with this if the
> system is so resource constrainted that processes barely using memory
> get killed.

David's report isn't too clear: did the syslogger process actually run
out of memory and exit of its own volition after an ENOMEM, or did it get
killed by the dreaded OOM killer?  In either case, it's unclear whether
it was really using an excessive amount of memory.  We have not heard
reports suggesting a memory leak in the syslogger, but maybe there is
one under unusual circumstances?

I think you're probably right that the real cause here is the OOM
killer just randomly seizing on the syslogger as a victim process;
although since the syslogger disconnects from shared memory, it's
not very clear why it would score high on the OOM killer's metrics.
The whole thing is definitely odd.

> We could work around a situation like that if we made postmaster use a
> *different* pipe as stderr than the one we're handing to normal
> backends. If postmaster created a new pipe and closed the read end
> whenever forking a syslogger, we should get EPIPEs when writing after
> syslogger died and could fall back to proper stderr or such.

I think that's nonsense, unfortunately.  If the postmaster had its
own pipe, that would reduce the risk of this deadlock because only
the postmaster would be filling that pipe, not the postmaster and
all its other children --- but it wouldn't eliminate the risk.
I doubt the increase in reliability would be enough to justify the
extra complexity and cost.

What might be worth thinking about is allowing the syslogger process to
inherit the postmaster's OOM-kill-proofness setting, instead of dropping
down to the same vulnerability as the postmaster's other child processes.
That presumes that this was an otherwise-unjustified OOM kill, which
I'm not quite sure of ... but it does seem like a situation that could
arise from time to time.

regards, tom lane




Re: [GENERAL] pg 10 crashes on int8_avg_combine

2017-11-14 Thread Tom Lane
Dingyuan Wang  writes:
> PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-8)
> 7.2.0, 64-bit
>
> (gdb) bt
> #0  int8_avg_combine (fcinfo=0x55bdb92472d8) at
> ./build/../src/backend/utils/adt/numeric.c:4285

I think this is the same issue being discussed at

https://www.postgresql.org/message-id/flat/20171110185747.31519.28038%40wrigleys.postgresql.org

        regards, tom lane




Re: [GENERAL] missing public on schema public

2017-11-14 Thread Tom Lane
Bo Thorbjørn Jensen  writes:
> I have some additional info and a fix.
> Firstly steps to reproduce:

Yeah, I can reproduce this.  I suspect it got broken by Stephen's hacking
around with default ACLs.  A simple example is

$ pg_dump -c -U postgres postgres | grep -i public
DROP SCHEMA public;
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT ON SCHEMA public IS 'standard public schema';
-- Name: public; Type: ACL; Schema: -; Owner: postgres
GRANT ALL ON SCHEMA public TO PUBLIC;

That's fine, but if I shove it through an archive file:

$ pg_dump -f p.dump -Fc -U postgres postgres

$ pg_restore -c p.dump | grep -i public
DROP SCHEMA public;
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT ON SCHEMA public IS 'standard public schema';

This is *REALLY BAD*.  Quite aside from the restore being wrong,
those two sequences should never ever give different results.
Stephen, you put some filtering logic in the wrong place in pg_dump.

    regards, tom lane




Re: [GENERAL] Retrieve the server's time zone

2017-11-14 Thread Tom Lane
Thomas Kellerer  writes:
> is there any way (short of writing a function in an untrusted PL) to 
> determine the actual time zone (or time) of the server OS? 

The default value of the timezone parameter is as close as you'll get
in modern versions of PG.

> "show timezone" always returns the client's time zone. 

AFAIK that would only be true if some part of your client stack
is issuing a SET TIMEZONE command.  (libpq will do that if it finds
a PGTZ environment variable set, but not in response to plain TZ.)

If that's true, and you can't/don't want to change it, you could try

select reset_val from pg_settings where name = 'TimeZone';

regards, tom lane




Re: [GENERAL] PostgreSQL walsender process doesn't exist after "pg_ctl stop -m fast"

2017-11-13 Thread Tom Lane
y39chen  writes:
> We encounter one problem that PostgreSQL walsender process doesn't exist
> after "pg_ctl stop -m fast".
> Uses PostgreSQL 9.6.2
 
There was a fix in 9.6.4 that's at least related to this problem.
It would be interesting to see if you can still reproduce it on
current 9.6.

    regards, tom lane




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Tom Lane
Andres Freund  writes:
> we could really do better than just wonder whether our signal to
> shutdown was received or not.  There probably should be a quite short
> timeout for the server to change status, and then a much longer one for
> that shutdown to finish.

While I don't want to just raise the timeout, I could get behind a more
thorough rethinking of the behavior there.

    regards, tom lane




Re: [GENERAL] Multiple unnests in query

2017-11-12 Thread Tom Lane
Aron Widforss  writes:
> Is this first query expected behavior? If so, what is the rationale?

The short answer is "because it's always worked that way".  You
might find the last half of section 37.4.8 illuminating:

https://www.postgresql.org/docs/devel/static/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SET

but if you're on a pre-v10 release, pay close attention to what it says
about the difference between v10 and pre-v10 behavior.

> I would have expected nine rows returned (as in my second example).

Your second example has approximately nothing to do with your first.
It has only one SRF in the SELECT list, so there's not much doubt
about what ought to happen.

        regards, tom lane




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Tom Lane
Christoph Berg  writes:
> The default systemd timeout seems to be 90s. I have already changed
> the systemd timeout to infinity (start) and 1h (stop), so only the
> default pg_ctl timeout remains (60s), which I'd rather not override
> unilaterally.

> That said, isn't 60s way too small for shutting down larger clusters?
> And likewise for starting?

Well, that's tied into the fact that pg_ctl doesn't disturb the server's
state if it gives up waiting.  If it did, we would certainly use a larger
timeout or none at all.  I don't feel a big need to change that default,
but if you have a surrounding script that is going to take adverse action
after a timeout then you need to use a larger value ...

regards, tom lane




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-12 Thread Tom Lane
Christoph Berg  writes:
> Re: Peter J. Holzer 2017-11-12 <20171112173559.m6chmbyf4vz6f...@hjp.at>
>> Wouldn't it be better to remove the timeout?

> If you don't want to block, don't depend on the database service. That
> question is independent from the timeout.

Agreed, but I think Peter has a point: why is there a timeout at all,
let alone one as short as 30 seconds?  Since systemd doesn't serialize
service starts unnecessarily, there seems little value in giving up
quickly.  And we know that cases such as crash recovery may take more
than that.

regards, tom lane




Re: [GENERAL] Difference between CAST(v AS t) and v::t

2017-11-11 Thread Tom Lane
Lele Gaifax  writes:
> while writing test cases for my SQL pretty printer tool[1], I found what seems
> a discrepancy in the "Type Casts" documentation[2]: it states that the two
> syntaxes are equivalent, but while

They are functionally equivalent ...

> EXCLUDE USING gist (CAST(company_id AS text) WITH =, validity WITH &&)
> works, the following
> EXCLUDE USING gist (company_id::text WITH =, validity WITH &&)
> is rejected with the message "syntax error at or near "::"".

... but expression-index syntax has the restriction that you need
parentheses around an expression unless it is, or at least looks like,
a function call.  CAST() looks enough like a function call for this
purpose, v::t does not.

I think there is relevant documentation for this near CREATE INDEX;
it doesn't seem like the province of the typecast docs to explain
the weirdnesses of index syntax.
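For illustration, a minimal sketch (hypothetical table; the point is the extra parentheses that make `v::t` acceptable wherever `CAST(v AS t)` is):

```sql
CREATE TABLE items (company_id integer);

-- CAST() looks enough like a function call to stand on its own:
CREATE INDEX ON items (CAST(company_id AS text));

-- v::t must be wrapped in its own set of parentheses:
CREATE INDEX ON items ((company_id::text));
```

The same parenthesization works in an EXCLUDE constraint's element list.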

regards, tom lane




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-10 Thread Tom Lane
Adam Brusselback  writes:
>> You might want to increase pg_ctl's wait timeout for this situation,
>> since the default's evidently too little.  However ...

> Got it, thanks.

>> ... pg_ctl itself wouldn't decide to forcibly shut down the server
>> if the timeout expired.  It merely stops waiting and tells you so.
>> It seems like this must represent misdesign of whatever start script
>> you're using.  I think you need to complain to the Debian packagers
>> about that.

> Hmm, interesting.  I installed from this repo:
> http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main
> I was assuming someone in the Postgres project was involved in
> packaging it.  Do you know who I should reach out to in that case?

Christoph's probably a good place to start.

regards, tom lane




Re: [GENERAL] Postgres 10.1 fails to start: server did not start in time

2017-11-10 Thread Tom Lane
Adam Brusselback  writes:
> I am in the process of upgrading to Postgres 10, and am having trouble
> getting my streaming replica working.
> OS: Debian 9.2
> Version: 10.1
> I have my primary backed up using pgbackrest, and I restore that to my
> replica.  It generates a recovery.conf which has a restore command for
> the WAL to pull them from the pgbackrest server.
> The recovery succeeds, but when I go to start the cluster on the
> standby, it begins to replay the WAL, and does so for about 30
> seconds.  Then I get a line in my log saying:

>> pg_ctl: server did not start in time

You might want to increase pg_ctl's wait timeout for this situation,
since the default's evidently too little.  However ...

> Followed by:
>> 2017-11-10 20:27:35.907 UTC [7132] LOG:  received smart shutdown request
>> ERROR [063]: : terminated on signal [SIGTERM]

... pg_ctl itself wouldn't decide to forcibly shut down the server
if the timeout expired.  It merely stops waiting and tells you so.
It seems like this must represent misdesign of whatever start script
you're using.  I think you need to complain to the Debian packagers
about that.

regards, tom lane




Re: [GENERAL] OpenSSL - PostgreSQL

2017-11-09 Thread Tom Lane
John R Pierce  writes:
> On 11/9/2017 1:59 PM, chiru r wrote:
>> How to configure PostgreSQL to allow specific cipher suites from 
>> different client applications?

> see https://www.postgresql.org/docs/9.5/static/ssl-tcp.html

Note that while you can adjust ssl_ciphers, it's a postmaster-wide
setting; there is not a provision for letting it be set per-user.
Since the SSL handshake necessarily occurs before we find out which
user is trying to connect, it'd be hard to do differently.
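For reference, a sketch of what the cluster-wide setting looks like in postgresql.conf (the cipher string shown is the stock default of that era, given only as an illustration, not a recommendation):

```
# postgresql.conf -- applies to the whole postmaster; there is
# no per-user or per-database override for the cipher list.
ssl = on
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
```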

        regards, tom lane




Re: [GENERAL] Index not used when using expression

2017-11-09 Thread Tom Lane
Dingyuan Wang  writes:
> I have a table named "gps", with an indexed column "packettime", which
> has unix timestamps.

> The following query:

> select * from gps where packettime < extract(epoch from '2017-05-01
> 08:00+08'::timestamp with time zone)

> explains to:

> Seq Scan on gps  (cost=0.00..43411860.64 rows=384325803 width=120)
>   Filter: ((packettime)::double precision < date_part('epoch'::text,
> '2017-05-01 08:00:00+08'::timestamp with time zone))

The reason that's not working for you is that the query is not testing
packettime, it's testing packettime::float8, because date_part() returns
float8.  You could cast the result of date_part() to bigint, or whatever
type the packettime column actually is, so that the comparison is to
the unadorned variable.
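A sketch of the rewritten query (assuming packettime is bigint; the cast moves to the constant side, so the index on packettime becomes usable):

```sql
SELECT * FROM gps
WHERE packettime < extract(epoch FROM
        '2017-05-01 08:00+08'::timestamptz)::bigint;
```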

regards, tom lane




Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Tom Lane
"Igal @ Lucee.org"  writes:
> I have a column named "discount" of type money in SQL Server.  I created 
> the table in Postgres with the same name and type, since Postgres has a 
> type named money, and am transferring the data by using PDI (Pentaho 
> Data Integration) Kettle/Spoon.

> Kettle throws an error though:  column "discount" is of type money but 
> expression is of type double precision.

> The value in the offending insert is:  0.0

> Why does Postgres decide that 0.0 is "double precision" (which is a 
> weird name in my opinion -- why can't it just be double) and not money?

Kettle must be telling it that --- on its own, PG would think '0.0'
is numeric, which it does have a cast to money for.

regression=# create table m (m1 money);
CREATE TABLE
regression=# insert into m values (0.0);
INSERT 0 1
regression=# insert into m values (0.0::numeric);
INSERT 0 1
regression=# insert into m values (0.0::float8);
ERROR:  column "m1" is of type money but expression is of type double precision
LINE 1: insert into m values (0.0::float8);
  ^
HINT:  You will need to rewrite or cast the expression.

You'll need to look at the client-side code to see where it's going wrong.

> The only solution I found is to set the column in Postgres to DOUBLE 
> PRECISION instead of MONEY, but I'm not sure if there are negative side 
> effects to that?

Well, it's imprecise.  Most people don't like that when it comes to
monetary amounts.

regards, tom lane




Re: [GENERAL] Block duplications in a shared buffers

2017-11-07 Thread Tom Lane
pinker  writes:
> I was analysing shared buffers content and noticed that exactly the same
> disk block appears there many times with different or the same usagecount.

Postgres would be completely broken if that were true, because
modifications made to one copy would fail to propagate to other copies.
I don't know where your data came from, but it can't be an accurate
representation of the instantaneous state of the buffer cache.

... actually, after looking at your query, I wonder whether the issue
is that you're failing to include database and tablespace in the
grouping key.  relfilenode isn't guaranteed unique across directories.
The fork number can matter, too.
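A sketch of a grouping that should show no genuine duplicates, using the pg_buffercache extension's view (database, tablespace, relfilenode, fork, and block number together identify a buffer's disk block uniquely):

```sql
SELECT reldatabase, reltablespace, relfilenode,
       relforknumber, relblocknumber, count(*)
FROM pg_buffercache
GROUP BY 1, 2, 3, 4, 5
HAVING count(*) > 1;
```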

        regards, tom lane




Re: [GENERAL] idle in transaction, why

2017-11-06 Thread Tom Lane
Rob Sargent  writes:
>   idle_in_transaction_session_timeout | 0 | default | | | A value of 0 turns off the timeout. | user

Meh.  I think we're barking up the wrong tree anyway: so far as I can
find, there is no error message reading 'idle transaction timeout'
in the existing PG sources (and I sure hope no committer would have
thought that such an ambiguous message text was satisfactory).
So I think your error is coming from client-side or third-party code.
What other moving parts have you got in there?

    regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-06 Thread Tom Lane
David Pacheco  writes:
> I ran into what appears to be a deadlock in the logging subsystem.  It
> looks like what happened was that the syslogger process exited because it
> ran out of memory.  But before the postmaster got a chance to handle the
> SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> That also failed, and the postmaster went to log a message about it, but
> it's blocked on the pipe that's normally connected to the syslogger,
> presumably because the pipe is full because the syslogger is gone and
> hasn't read from it.

Ugh.

> ... that process appears to have exited due to a fatal error
> (out of memory).  (I know it exited because the process still exists in the
> kernel -- it hasn't been reaped yet -- and I think it ran out of memory
> based on a log message I found from around the time when the process
> exited.)

Could we see the exact log message(s) involved?  It's pretty hard to
believe that the logger would have consumed much memory.

regards, tom lane




Re: [GENERAL] idle in transaction, why

2017-11-06 Thread Tom Lane
"David G. Johnston"  writes:
> You should probably login as your application user and do "show
> idle_in_transaction_session_timeout" to see what a clean session has for a
> value and then figure out from there where that value is coming from.

You don't have to guess about the latter: the pg_settings view will tell
you exactly where the active value came from.  See the source, sourcefile,
sourceline columns.
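That is, something along these lines (note that sourcefile and sourceline may be visible only to superusers):

```sql
SELECT name, setting, source, sourcefile, sourceline
FROM pg_settings
WHERE name = 'idle_in_transaction_session_timeout';
```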

    regards, tom lane




Re: [GENERAL] Naming conventions for column names

2017-11-06 Thread Tom Lane
Sachin Kotwal  writes:
> I can understand that it is important to maintain naming pattern same as
> system catalogs, but in that case we may need to redefine system catalogs
> naming conventions .

Those naming conventions are twenty-five years old, and there is an
astonishing amount of client code that would break if we ran around
changing existing system catalog column names.  It's very unlikely that
any proposal to do that would even receive serious consideration.

The bar to using new naming conventions in new catalogs would be
lower, of course, but then you have to think about the confusion
factor of having different naming styles in different places.

        regards, tom lane




Re: [GENERAL] xmin increasing within a transaction block?

2017-11-06 Thread Tom Lane
Alvaro Herrera  writes:
> Luca Ferrari wrote:
>> Why is xmin greater than the current transaction id (and most notably
>> not "fixed")?

> Something is using subtransactions there.  My first guess would be that
> there are triggers with EXCEPTION blocks, but your example doesn't show
> any.  Or maybe you have event triggers.

I can reproduce the example if I "\set ON_ERROR_ROLLBACK on" in psql.

regards, tom lane




Re: [GENERAL] Runtime analysis

2017-11-05 Thread Tom Lane
Neto pr  writes:
> I expected that the first run would always take longer than the others
> because of not having cached data, but look what happened:
>- in 6 cases the first execution was faster than all other executions.
>- in only 2 cases was the first execution slower than all other
>executions.
> If anyone has a suspicion or explanation of why in some cases the first
> execution can be faster than the others, please reply to this email.

Your Xeon is probably a variable-speed chip; did you take measures to
freeze the CPU frequency?  On my RHEL server, I generally can't get
very reproducible numbers from benchmarks unless I first do
"sudo cpupower frequency-set --governor performance"
because the default "ondemand" governor is too eager to ratchet down
the frequency.  Things might be different on Debian though.

In multi-socket servers, NUMA effects across sockets can be a big
headache too.

regards, tom lane




Re: [GENERAL] EXPLAIN command just hangs...

2017-11-04 Thread Tom Lane
Rhhh Lin  writes:
> So I decided to intervene and terminate some active sessions (which were 
> active and not waiting), all of which were querying this table with a similar 
> query, by using pg_terminate_backend(pid). Some of these sessions were in an 
> active state for a few days executing this query. However, the termination of 
> the PIDs [the command succeeded and returned 't'] has not closed down the 
> sessions. They are still present, and still active and still executing. Those 
> current sessions are for whatever reason completely "hung", locking out 
> exclusive access to the table (from vacuum) and cannot be killed. I have not 
> yet resorted to trying to kill the process PID's from the OS perspective. 
> Instead, I will try to schedule an outage over the next few days, restart the 
> server, implement a vacuum execution and see where I am at then.

Well, that's pretty interesting in itself.  Any chance of attaching to one
of those unkillable backends with gdb and getting a stack trace?

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane




Re: [GENERAL] Adding 'serial' to existing column

2017-11-03 Thread Tom Lane
Robert Lakes  writes:
> I am new to Postgres and I am trying to build this SQL statement in my SQL
> script:
> ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
> nextval('tab_id_seq');

That looks correct, but this not so much:

> I am trying to build the above-stated command as a dynamic SQL statement:
>  EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
> nextval('||quote_ident(tab_id_seq)||')';

> I've received an error when running the script this above-stated command:
>  ERROR:  column "tab_id_seq" does not exist
> LINE 1: ...OLUMN table_id SET DEFAULT nextval('||quote_ident(tab_id_seq...

You want quote_literal, not quote_ident, because you're trying to produce
a single-quoted literal.

regards, tom lane




Re: [GENERAL] EXPLAIN command just hangs...

2017-11-02 Thread Tom Lane
Rhhh Lin  writes:
> The EXPLAIN  statement is currently 'f' for waiting and 'active' via 
> pg_stat_activity, so it is doing something. The ps command does not show me 
> anything more verbose.

ps would confirm for sure whether it was eating CPU time, whereas I do not
particularly trust pg_stat_activity to tell you that.

> The reason I am very suspect of the timestamp column makeup is that if I
> remove that predicate from the EXPLAIN command and the actual query,
> both complete within seconds without issue.

We've seen issues with the planner having trouble trying to determine the
extreme values of an indexed column, in cases where there are a lot of
uncommitted or recently-dead entries at the end of the index --- it does
a lot of work trying to verify the commit status of each entry in turn.
So I wonder if that might apply.

regards, tom lane




Re: [GENERAL] Building tds_fdw Extension for Windows 64bit

2017-11-02 Thread Tom Lane
"Igal @ Lucee.org"  writes:
> So now I am getting the errors below.  Do I need to add an entry in 
> _deffile_ for each error message?  What does the entry look like?

These look like you're failing to link to whatever library supplies
dbbind() etc.  Can't help you more than that.

        regards, tom lane




Re: [GENERAL] Building tds_fdw Extension for Windows 64bit

2017-11-02 Thread Tom Lane
"Igal @ Lucee.org"  writes:
> Any other ideas?

It looks to me like MSVC is complaining about the PGDLLEXPORT
markings that tds_fdw.c has on the function definitions
(not the extern declarations).  In the core code we only put
PGDLLEXPORT in extern declarations ... so try keeping it in
the externs and removing it from the .c file.

You may still get warnings that way, which I think you probably
have to live with, unless you want to make a .DEF file or use
--export-all-symbols as a substitute for PGDLLEXPORT'ing specific
functions.  For some sorry history around this see

https://www.postgresql.org/message-id/flat/A737B7A37273E048B164557ADEF4A58B539300BD%40ntex2010a.host.magwien.gv.at

        regards, tom lane




Re: [GENERAL] Possible bug: could not open relation with OID [numbers] SQL State: XX000

2017-11-01 Thread Tom Lane
I wrote:
> You could have encountered it anytime since TOAST was invented, or at
> least since RETURN QUERY was invented (the latter is newer IIRC).
> The fact that the bug has been there so long and has only been reported
> a couple of times is the main reason why I'm loath to take a brute
> force duplicate-the-data approach to fixing it.  Such a fix would
> penalize many more people than it would help.

Just thinking idly about what a not-so-brute-force fix might look like
... I wonder if we could postpone the actual drop of toast tables to
end of transaction?  I'm not sure how messy that would be, or if it
would have negative consequences elsewhere.  But it might be an idea.

We already postpone removal of the underlying disk files till end
of transaction, since we don't know if a DROP TABLE will get rolled
back.  The idea here would be to postpone deletion of the system
catalog entries for the toast table as well.

I'm not likely to work on this idea myself in the near future,
but if anyone else is feeling motivated to attack the problem,
have at it ...

    regards, tom lane




Re: [GENERAL] Possible bug: could not open relation with OID [numbers] SQL State: XX000

2017-11-01 Thread Tom Lane
Adam Brusselback  writes:
> Huh, so in the other cases where the function works fine, it's likely that
> the data all just fits within the regular table and doesn't have to be
> TOAST'ed?

If that's the correct theory, yes.  Did you match up the OID yet?

> So this is something that isn't changed in PG10, and I could have
> encountered in 9.6, and just by chance didn't?

You could have encountered it anytime since TOAST was invented, or at
least since RETURN QUERY was invented (the latter is newer IIRC).
The fact that the bug has been there so long and has only been reported
a couple of times is the main reason why I'm loath to take a brute
force duplicate-the-data approach to fixing it.  Such a fix would
penalize many more people than it would help.

regards, tom lane




Re: [GENERAL] Possible bug: could not open relation with OID [numbers] SQL State: XX000

2017-11-01 Thread Tom Lane
Adam Brusselback  writes:
> The OID does not match any of the temp tables, so not sure what's up there.
> I have the function RETURN QUERY,
> and then I drop all my temp tables.

I'll bet the OID corresponds to the toast table for one of those temp
tables.  RETURN QUERY will stash away all the values read by the query,
but it doesn't make an attempt to inline out-of-line values; so you get
a failure when the out-of-line column value is eventually demanded.

I think we've seen one previous complaint of the same ilk.  Probably
somebody will get annoyed enough to fix it at some point, but the
sticking point is how to cover this corner case without causing a
performance drop for normal cases.  In the meantime, maybe you could
make the temp tables be ON COMMIT DROP instead of dropping them
explicitly mid-transaction.
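
A minimal sketch of that workaround (the temp table and column names here are invented for illustration):

```sql
-- Inside the plpgsql function body: let the transaction drop the table
-- instead of dropping it explicitly after RETURN QUERY, so any TOASTed
-- values are still readable when the result set is consumed.
CREATE TEMP TABLE scratch (big_val text) ON COMMIT DROP;
-- ... populate scratch ...
RETURN QUERY SELECT big_val FROM scratch;
-- note: no explicit "DROP TABLE scratch;" before the function returns
```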

    regards, tom lane




Re: [GENERAL] the database system is shutting down - terminating walsender process due to replication timeout

2017-11-01 Thread Tom Lane
Zarko Aleksic  writes:
> I'm looking for a bit of help understanding a particular behavior we are 
> seeing with our PostgreSQL 9.6. After issuing a service shutdown command with 
> "systemctl stop" on RHEL 7 our PostgreSQL instance started behaving weirdly. 
> For the first time it wouldn't shutdown so easily / quickly.
> From the logs we could see that standby nodes that were trying to connect 
> were rejected due to database being shutdown. After wal_sender_timeout and 
> wal_receiver_timeout (default 60s) were reached the database finally shut 
> down. It seems that walsender process was preventing the shutdown of the 
> master database - until timeout was reached, a behavior we didn't experience 
> before.

9.6.what?

There were several possibly-relevant bug fixes in 9.6.3 and 9.6.4,
notably this one:

Author: Tom Lane 
Branch: master Release: REL_10_BR [fca85f8ef] 2017-06-30 12:00:15 -0400
Branch: REL9_6_STABLE Release: REL9_6_4 [e9d4aa594] 2017-06-30 12:00:03 -0400
Branch: REL9_5_STABLE Release: REL9_5_8 [446914f6b] 2017-06-30 12:00:03 -0400
Branch: REL9_4_STABLE Release: REL9_4_13 [5aa8db014] 2017-06-30 12:00:03 -0400

Fix walsender to exit promptly if client requests shutdown.

It's possible for WalSndWaitForWal to be asked to wait for WAL that doesn't
exist yet.  That's fine, in fact it's the normal situation if we're caught
up; but when the client requests shutdown we should not keep waiting.
The previous coding could wait indefinitely if the source server was idle.

In passing, improve the rather weak comments in this area, and slightly
rearrange some related code for better readability.

Back-patch to 9.4 where this code was introduced.

Discussion: https://postgr.es/m/14154.1498781...@sss.pgh.pa.us

I think that would only apply if the walsender's client had tried to
disconnect at the same time you were doing the master-server shutdown,
but maybe that's what happened.

There is still work going on around the walsender timeout, so maybe
what you hit is an as-yet-unresolved bug, but in any case you should
be keeping up with minor releases.

regards, tom lane




Re: [GENERAL] Make "(composite).function_name" syntax work without search_path changes?

2017-10-30 Thread Tom Lane
"David G. Johnston"  writes:
> The system knows that the datatype being inspected is "altschema.alttype" -
> would it be reasonable for the system to check for a function named "label"
> in the same schema as the target type, "altschema", with the target
> argument type and invoke it if present?

The rule is that (v).label is equivalent to label(v), therefore it will
only find function "label" if that's in your search path.  I am very
much not excited about randomly enlarging the search path depending on
syntax --- quite aside from the difficulty of documenting it clearly,
that seems like a great recipe for security hazards.

regards, tom lane




Re: [GENERAL] UPDATE syntax change

2017-10-30 Thread Tom Lane
Adam Brusselback  writes:
> --works
> UPDATE tst_table
> SET (b, c) = ('help me', 'please')
> WHERE a = 0;
> --does not work
> UPDATE tst_table
> SET (b) = ('help me')
> WHERE a = 0;

> So there was a change made, and you now cannot use the multi-column
> syntax if you're only updating a single column.  Was this intentional?

You still can, but you have to write ROW() explicitly.  This conforms
to the standard, which our old behavior didn't.

It was probably an oversight not to list this change as a compatibility
issue.  I'll go fix that ...
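
Concretely, against the example table above, all of these forms work in v10:

```sql
-- Two or more target columns: parenthesized syntax as before.
UPDATE tst_table SET (b, c) = ('help me', 'please') WHERE a = 0;

-- One target column with the parenthesized syntax: ROW() must be explicit.
UPDATE tst_table SET (b) = ROW('help me') WHERE a = 0;

-- Or just use the plain single-column form.
UPDATE tst_table SET b = 'help me' WHERE a = 0;
```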

regards, tom lane




Re: [GENERAL] gin index trouble

2017-10-30 Thread Tom Lane
Rob Sargent  writes:
>> If you can make a test case that (eventually) hits that, we'd be
>> interested to see it ...

> Any hint(s) on what might trigger this sort of thing?  I could duplicate 
> the upload, but I doubt you want the 800K records, 200M input file even 
> if it did regenerate the problem.

It's possible you could duplicate the failure with synthetic data
generated by a not-very-long script.  That would beat uploading
a large data file, not to mention possibly needing to sanitize
your data.

        regards, tom lane




Re: [GENERAL] gin index trouble

2017-10-30 Thread Tom Lane
Rob Sargent  writes:
> I’ve hit this same message 
> Caused by: org.postgresql.util.PSQLException: ERROR: right sibling of GIN 
> page is of different type
> in a couple of contexts and I’m starting to get worried. 

If you can make a test case that (eventually) hits that, we'd be
interested to see it ...

> and I um, er, enabled gin on uuid by copying from a thread in this list, as 
> follows:
> create operator class _uuid_ops
> default for type _uuid
> using gin as
> operator 1 &&(anyarray, anyarray)
> ,operator 2 @>(anyarray, anyarray)
> ,operator 3 <@(anyarray, anyarray)
> ,operator 4 =(anyarray, anyarray)
> ,function 1 uuid_cmp(uuid, uuid)
> ,function 2 ginarrayextract(anyarray, internal, internal)
> ,function 3 ginqueryarrayextract(anyarray, internal, smallint, internal, 
> internal, internal, internal)
> ,function 4 ginarrayconsistent(internal, smallint, anyarray, integer, 
> internal, internal, internal, internal)
> ,storage uuid;

You should not have needed to do that, I think, as the standard
anyarray GIN opclass should've handled it.  Having said that,
I don't immediately see anything broken about this definition,
so it seems like it should've worked.

regards, tom lane




Re: [GENERAL] starting PG command line options vs postgresql.con

2017-10-30 Thread Tom Lane
rakeshkumar464  writes:
> I am new to Docker env and I see that PG, as a container is started with
> [ lots of command-line parameters ]

> I would prefer using postgresql.conf.  what is the consensus in this forum
> regarding command line vs postgresql.conf.  Also if conflicting, which one
> takes priority.

The command line takes priority, IIRC, which means that nothing set on
the command line can be overridden without a restart.

I like to specify -p on the command line so that it's easy to tell which
postmaster is which in "ps" listings (of course, this only matters if
you're running multiple postmasters).  Otherwise it's better to leave
as much as you can to postgresql.conf.

    regards, tom lane




Re: [GENERAL] query not scaling

2017-10-26 Thread Tom Lane
Laurenz Albe  writes:
> Also, to have PostgreSQL inline the function, which would be good
> for performance, it should be declared IMMUTABLE.

Actually, if you hope to have a SQL function be inlined, it's better
not to decorate it at all --- not with IMMUTABLE, and not with STRICT
either.  Both of those restrict the parser's ability to inline unless
it can prove the contained expression is equally immutable/strict.
With the default attributes of volatile/not strict, there's nothing
to prove.

(In any case, it's usually easy enough to tell from EXPLAIN output
whether inlining has happened.)
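
For instance (an illustrative function and table, not from the thread):

```sql
-- No IMMUTABLE/STRICT decoration: with the default volatile/not-strict
-- attributes there is nothing the planner has to prove before inlining.
CREATE FUNCTION add_tax(amount numeric) RETURNS numeric
    LANGUAGE sql AS $$ SELECT amount * 1.08 $$;

-- If the call was inlined, EXPLAIN shows the expanded expression
-- (price * 1.08) rather than a call to add_tax().
EXPLAIN VERBOSE SELECT add_tax(price) FROM orders;
```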

        regards, tom lane




Re: [GENERAL] How do I insert and update into a table of arrays of composite types via a select command?

2017-10-25 Thread Tom Lane
"David G. Johnston"  writes:
> On Wed, Oct 25, 2017 at 2:16 PM, Celia McInnis 
>> Got it, finally...
>> insert into t_array select array[row((data_comp).*)::mytype[] from
>> t_composite;
>> 
>> I'm not sure why I need (data_comp).* rather than some of the other things
>> that I tried and failed with...

> ​The ​unusual set of parentheses are so the parser interprets data_comp is
> a column and not a table. Usually one write SELECT tbl.* FROM tbl so that
> is the assumed meaning of "name".*

If data_comp is a column of a composite type, you probably don't need all
that notation anyway --- seems like array[data_comp::my_type] or
array[data_comp]::my_type[] ought to work.
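
That is, assuming data_comp is a column of composite type mytype in t_composite (per the earlier messages in this thread), something along these lines ought to suffice:

```sql
-- ARRAY[data_comp] already yields mytype[], so often no cast is needed:
INSERT INTO t_array SELECT ARRAY[data_comp] FROM t_composite;

-- With an explicit cast, either placement works:
INSERT INTO t_array SELECT ARRAY[data_comp::mytype] FROM t_composite;
INSERT INTO t_array SELECT ARRAY[data_comp]::mytype[] FROM t_composite;
```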

regards, tom lane




Re: [GENERAL] How do I insert and update into a table of arrays of composite types via a select command?

2017-10-25 Thread Tom Lane
Celia McInnis  writes:
> My first question is: HOW do I populate this table, at first with arrays
> containing one element from the t_composite table?
> I have tried various things with no success.

You didn't say what you tried, but I imagine it was something like

regression=# insert into t_array select 
array[row('a','b','c'),row('d','e','f')];
ERROR:  column "data_array" is of type mytype[] but expression is of type 
record[]
LINE 1: insert into t_array select array[row('a','b','c'),row('d','e...
   ^
HINT:  You will need to rewrite or cast the expression.

Like it says, you need a cast.  You can either cast the array elements
individually:

regression=# insert into t_array select 
array[row('a','b','c')::mytype,row('d','e','f')::mytype];
INSERT 0 1

or just cast the whole ARRAY[] construct:

regression=# insert into t_array select 
array[row('a','b','c'),row('d','e','f')]::mytype[];
INSERT 0 1

although I think the latter only works in relatively late-model
Postgres, and it might make parsing a bit slower too.

regards, tom lane




Re: [GENERAL] Two versions of an extension in the same cluster?

2017-10-23 Thread Tom Lane
Paul Jungwirth  writes:
> I've got an extension that supplies functions written in C. Two 
> databases from the same cluster both use this extension. I understand 
> how I can load the example--2.0.0.sql file in one database, and 
> example--3.0.0.sql in another, but from what I can tell both databases 
> still share the same .so file. Is there any way to provide a separate 
> .so for each version?

Not very easily.  You can probably make it happen if you're stubborn
enough, but you'd have to do something like having the version update
script CREATE OR REPLACE every last C function in the extension to
change its shlib name from, eg, 'example-2' to 'example-3'.

Another thing that you should think long and hard on is that if you
go this route, there are likely to be scenarios where both libraries
are loaded into a backend process' address space.  Will they cope?
(For instance, if they both hook into some backend function hook,
will it be OK that hook actions get done twice?)

I think there are some other gotchas related to getting through a
pg_upgrade scenario.  For instance, to get from 9.6 running example-2
to 10 running example-3, you'd need a version of example-2 built for
v10 (if you do the ALTER EXTENSION UPDATE after pg_upgrade) or a
version of example-3 built for 9.6 (if you do it in the other order).
So this definitely isn't going to reduce the number of builds you
have to maintain, rather the opposite.

There was a thread not too long ago concerning somebody who was putting
version numbers in his shlib name, and it was breaking things, and the
general recommendation was "so don't do that".  I don't recall the details
but you'd be well advised to check the archives before going that route.

> If not, what is the best approach for releasing a new .so that keeps the 
> old functionality? I guess something this?:

>  # example--2.0.0.sql
>  CREATE OR REPLACE FUNCTION
>  myfunc(anyarray)
>  RETURNS integer
>  AS 'example', 'myfunc_v2_0_0'
>  LANGUAGE c IMMUTABLE;

>  # example--3.0.0.sql
>  CREATE OR REPLACE FUNCTION
>  myfunc(anyarray)
>  RETURNS integer
>  AS 'example', 'myfunc_v3_0_0'
>  LANGUAGE c IMMUTABLE;

Yeah, there are several examples of that sort of thing in the contrib
modules.  Usually we only stick a version number into the C function
name when the function actually changes meaningfully; otherwise you'll
have a lot of makework boiler-plate in your version update scripts.
(Cases where you really need to do this should be the minority,
I'd think, otherwise you're talking about enough SQL behavioral change
that your users will probably be unhappy with you.)

regards, tom lane




Re: [GENERAL] Installing PostgreSQL 10 on Mac OSX Undefined Symbol _heap_modify_tuple_by_cols

2017-10-23 Thread Tom Lane
Ben Madin  writes:
> we are quite excited about the parallelisation enhancements, and keen to
> try, but trying to build (using the same configure as we have used for 9.6)
> is giving some warnings and errors.

Something's definitely messed up there:

> gcc -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv
> -Wno-unused-command-line-argument -O2 -arch x86_64  -DREFINT_VERBOSE -I.
> -I./ -I/usr/local/pgsql965/include/server
> -I/usr/local/pgsql965/include/internal
> -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/libxml2
> -I/usr/local/include  -c -o autoinc.o autoinc.c

Looking at this example of a v10 build log on macOS:
https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2017-10-23%2018%3A15%3A34&stg=make

the compile command for autoinc is

ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv 
-Wno-unused-command-line-argument -g -O2 -fno-common 
-Wno-deprecated-declarations -Werror  -DREFINT_VERBOSE -I. -I. 
-I../../src/include  
-I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/libxml2
  -I/usr/local/ssl/include  -c -o autoinc.o autoinc.c

Some of the discrepancies (e.g. -Werror on the buildfarm machine) are
explainable as different configuration choices, but the references to
/usr/local/pgsql965 in your build sure look like trouble.

> Is this looking for an existing environment variable (which
> seems unlikely for a build process) or is something else unusual?

I believe the configure script *does* pay attention to environment
variables, particularly CPPFLAGS and CFLAGS.  Most likely you had
version-specific values in those when you ran configure, and they
got absorbed into src/Makefile.global.

regards, tom lane




Re: [GENERAL] parray_gin and \d errors in PG10

2017-10-22 Thread Tom Lane
I wrote:
> Or maybe what we should do is to avoid @> in favor of using
> ('d' = any(stxkind))

Pushed that way.

        regards, tom lane




Re: [GENERAL] parray_gin and \d errors in PG10

2017-10-22 Thread Tom Lane
Justin Pryzby  writes:
> On Sun, Oct 22, 2017 at 02:36:12PM -0400, Tom Lane wrote:
>> ...  Possibly we could use
>> (stxkind @> '{d}'::pg_catalog."char"[])
>> That works for me without parray_gin installed, but I wonder whether
>> it fails due to ambiguity if you do have parray_gin installed.

> [ yup ]

Bleah.

One option is to use OPERATOR(pg_catalog.@>), which aside from being
really ugly, isn't a complete fix because it still wouldn't work if
someone had decided to install parray_gin into the pg_catalog schema.

Or maybe what we should do is to avoid @> in favor of using

('d' = any(stxkind))

That's a bit less nice because it doesn't generalize as cheaply to
looking for multiple stxkind values, but since this query has no
current need for that, maybe it's fine.

Anyway, the fact that this is such a mess points up why trying to
alias polymorphic operators isn't such a hot idea.  parray_gin
really ought to get rid of that operator.

regards, tom lane




Re: [GENERAL] parray_gin and \d errors in PG10

2017-10-22 Thread Tom Lane
Justin Pryzby  writes:
> After installing parray_gin extension and pg_upgrading another instance,
> \d is failing like so:

> [pryzbyj@database ~]$ psql ts -c '\d pg_class'
> ERROR:  operator is not unique: "char"[] @> unknown
> LINE 6:   (stxkind @> '{d}') AS ndist_enabled,

Ugh.

> Thankfully this is still working:
> ts=# \do @>
> ...
>  pg_catalog | @>   | anyarray  | anyarray   | boolean | contains
> ...
>  public | @>   | text[]| text[] | boolean | text 
> array contains compared by strict

I'm inclined to think it wasn't very bright of parray_gin to have
installed an operator that's confusable with the builtin anyarray @>
anyarray operator.  Still, we might as well try to work around that.

> This query works fine when adding cast to text[]:

No, that will fail entirely if you don't have parray_gin installed,
because stxkind is of type "char"[], and "char"[] @> text[] will not
match the anyarray operator.  Possibly we could use

(stxkind @> '{d}'::pg_catalog."char"[])

That works for me without parray_gin installed, but I wonder whether
it fails due to ambiguity if you do have parray_gin installed.  In
principle this'd still match the text[] @> text[] operator, and I'm
not sure whether we have an ambiguity resolution rule that would
prefer one over the other.

regards, tom lane




Re: [GENERAL] Weird performance difference

2017-10-20 Thread Tom Lane
Israel Brewster  writes:
>> Can you send "explain" (not explain analyze) for the production server?

> Not for the full query - it only just completed, after 70 minutes or so, and 
> I wasn't running under EXPLAIN ANALYZE. Running with a shorter date range of 
> only 7 days, as you suggest below: https://explain.depesz.com/s/r80j 

First thing that jumps out from that is

> Foreign Scan on oag_schedules (cost=100.00..128.60 rows=620 width=108) 
> (actual time=3.576..477.524 rows=79,853 loops=1)

Being off by a factor of 100 at the scan level is never a good start for a
join plan.  Turn on use_remote_estimate (assuming these are postgres_fdw
tables).  Also try explicitly ANALYZE'ing the foreign tables.  I do not
believe auto-analyze will touch foreign tables ...
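
Both suggestions can be applied roughly like this (this assumes postgres_fdw; "remote_srv" is a made-up server name, and oag_schedules is the foreign table from the plan):

```sql
-- Ask the remote server for row estimates at plan time, either for the
-- whole foreign server or for an individual table:
ALTER SERVER remote_srv OPTIONS (ADD use_remote_estimate 'true');
ALTER FOREIGN TABLE oag_schedules OPTIONS (ADD use_remote_estimate 'true');

-- Collect local statistics; autovacuum does not analyze foreign tables.
ANALYZE oag_schedules;
```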

regards, tom lane




Re: [GENERAL] How to find out extension directory

2017-10-20 Thread Tom Lane
"David G. Johnston"  writes:
> On Fri, Oct 20, 2017 at 1:12 PM, rakeshkumar464 
> wrote:
>> How do I know beforehand where the dir path is ?

> I think pg_config (
> https://www.postgresql.org/docs/current/static/app-pgconfig.html ) is what
> you are looking for.

specifically, you can assume it's the "extension" subdirectory of
whatever SHAREDIR is.  "pg_config --sharedir" will tell you that.

regards, tom lane




Re: [GENERAL] How do you decide what aggregates to add?

2017-10-20 Thread Tom Lane
Seamus Abshere  writes:
> Who decides if a seemingly-useful aggregate is added to Postgres?

There's no particularly well-defined process for it, but as far as
these items go:

> 1. I just discovered first()/last() as defined in the wiki [1], where
> it's noted that conversion from Access or Oracle is much easier with
> them.

Those are (a) undefined as to precise behavior, (b) redundant with
the first_value/last_value window functions, which *are* well defined
(not to mention SQL-standard), and (c) easy enough to make in SQL if
you want them despite (a) and (b).  So I don't really see them
getting over the hump.

> 2. We use our "homemade" jsonb_object_agg(jsonb) constantly, which is
> modeled off of (built-in) json_object_agg(name, value) and (built-in)
> jsonb_agg(expression). [2]

I dunno, the semantics of using jsonb_concat as an aggregate transfn
seem pretty squishy to me.  It's certainly much less well-defined
as to what you get than for either of the existing aggs you mention.

The short answer really is that we spend a lot of sweat on making Postgres
extensible so that we don't have to put in (and then maintain forever)
every little special-purpose function somebody might want.

regards, tom lane




Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread Tom Lane
"David G. Johnston"  writes:
> On Thu, Oct 19, 2017 at 12:14 PM, Tom Lane  wrote:
>> FROM  products,
>> (values ('red widget'::text)) consts(target)
>> WHERE similarity(target, item_name) > 0.25
>> ORDER BY target <<-> item_name
>> 
>> PG 9.5 and up will flatten out cases like this to be exactly what you
>> wrote out longhand.

> ​Does it matter if the values expression is embedded in a CTE?

Yes, CTEs are optimization fences ...

regards, tom lane




Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread Tom Lane
"Igal @ Lucee.org"  writes:
> On 10/19/2017 8:44 AM, David G. Johnston wrote:
>> Adding lots of new custom syntax to pure server-side parsed SQL is a 
>> non-trivial undertaking whose need is reduced by the alternatives so 
>> described (functions, DO block, PREPARE, psql).

> I still think that using server side variable is a much easier and 
> intuitive way of doing this.  All of the alternatives have major flaws.

Server-side variables aren't exactly flaw-free either.  We looked into
that before and couldn't really come to agreement on a good definition.
You might want to check the archives.  However, I'd just point out that
variables, per se, aren't that useful unless you also have control flow
constructs, and at that point you're way outside SQL.

> My real query is for similarity here, so I'm testing different functions 
> with the same value, e.g.

> SELECT item_name
>  , similarity('red widget', item_name)
>  , similarity(item_name, 'red widget')
>  , word_similarity('red widget', item_name)
>  , word_similarity(item_name, 'red widget')
>  , item_name <->> 'red widget'
>  , item_name <<-> 'red widget'
>  , 'red widget' <<-> item_name
> FROM  products
> WHERE similarity('red widget', item_name) > 0.25
> ORDER BY 'red widget' <<-> item_name

> So each time I want to change the phrase it's a slower process than what 
> I'm used to (think multiple-cursor in modern text editors, or a 
> server-side variable)

Well, this is simply not exploiting SQL very well.  You could use a
VALUES subquery to provide the string you're using elsewhere in the query.

SELECT item_name
 , similarity(target, item_name)
 , similarity(item_name, target)
 , word_similarity(target, item_name)
 , word_similarity(item_name, target)
 , item_name <->> target
 , item_name <<-> target
 , target <<-> item_name
FROM  products,
  (values ('red widget'::text)) consts(target)
WHERE similarity(target, item_name) > 0.25
ORDER BY target <<-> item_name

PG 9.5 and up will flatten out cases like this to be exactly what you
wrote out longhand.

regards, tom lane




Re: [GENERAL] pgpass file type restrictions

2017-10-19 Thread Tom Lane
"Daniel Verite"  writes:
>   Desidero wrote:
>> When attempting to use something like an anonymous pipe for a
>> passfile, psql throws an error stating that it only accepts plain files

> So the script doing that has access to the password(s) in clear text.
> Can't it instead push the password into the PGPASSWORD
> environment variable, avoiding creating .pgpass in any form?

On many platforms, it's possible for other users to see the environment
variables of a process.  So PGPASSWORD is really quite insecure.

regards, tom lane




Re: [GENERAL] pgpass file type restrictions

2017-10-18 Thread Tom Lane
Desidero  writes:
> I’m running into problems with the restriction on pgpass file types. When
> attempting to use something like an anonymous pipe for a passfile, psql
> throws an error stating that it only accepts plain files.
> ...
> Does anyone know why it’s set up to avoid using things like anonymous pipes
> (or anything but "plain files")?

A bit of digging in the git history says that the check was added here:

commit 453d74b99c9ba6e5e75d214b0d7bec13553ded89
Author: Bruce Momjian 
Date:   Fri Jun 10 03:02:30 2005 +

Add the "PGPASSFILE" environment variable to specify to the password
file.

Andrew Dunstan

and poking around in the mailing list archives from that time finds
what seems to be the originating thread:

https://www.postgresql.org/message-id/flat/4123BF8C.5000909%40pse-consulting.de

There's no real discussion there of the check for plain-file-ness.
My first guess would have been that the idea was to guard against
symlink attacks; but then surely the stat call needed to have been
changed to lstat?  So I'm not quite sure of the reasoning.  Perhaps
Andrew remembers.

> If it matters,
> I'm trying to use that so I can pass a decrypted pgpassfile into postgres
> since my company is not allowed to have unencrypted credentials on disk
> (yes, I know that it's kind of silly to add one layer of abstraction, but
> it's an industry rule we can't avoid).

I cannot get excited about that proposed use-case, though.  How is a pipe
any more secure than a plain file with the same permissions?

My thought is that you shouldn't be depending on passwords at all, but
on SSL credentials or Kerberos auth, both of which libpq supports fine.
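
For illustration, a minimal pg_hba.conf sketch of certificate-based
authentication (database, role, and address are placeholders, not taken
from the thread):

# TYPE    DATABASE  USER   ADDRESS      METHOD
# "cert" authenticates via the client certificate's CN matching the
# role name; no password is ever stored or transmitted
hostssl   mydb      myapp  10.0.0.0/8   cert

With this in place the client needs only sslcert/sslkey connection
parameters, and the password-on-disk problem disappears entirely.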

regards, tom lane




Re: [GENERAL] Finally upgrading to 9.6!

2017-10-17 Thread Tom Lane
Ron Johnson  writes:
> Where can I look to see (roughly) how much more RAM/CPU/disk needed when 
> moving from 8.4 and 9.2?

It's entirely possible you'll need *less*, as you'll be absorbing the
benefit of several years' worth of performance improvements.  But this
is such a workload-dependent thing that there's no general answer.

        regards, tom lane




Re: [GENERAL] Force SSL connection

2017-10-16 Thread Tom Lane
rakeshkumar464  writes:
> In PG 9.6 or PG 10, is there a way to force only SSL based connections coming
> from pgadmin or dbeaver.

I think you could set that up with a custom pg_hba.conf entry or two.
Something like

hostnossl ... pgadmin ... reject

before the line that would normally accept the connection.
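
Spelled out a bit more concretely (role name and addresses are
placeholders; pg_hba.conf records are matched first-to-last, so the
reject entry must precede the accepting one):

# TYPE      DATABASE  USER     ADDRESS     METHOD
hostnossl   all       appuser  0.0.0.0/0   reject
hostssl     all       appuser  0.0.0.0/0   md5

Note that pg_hba.conf matches on database, role, and address; it cannot
distinguish which client program (pgadmin, dbeaver, ...) is connecting,
so in practice you enforce this per role or per address.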

regards, tom lane




Re: [GENERAL] Non-overlapping updates blocking each other

2017-10-16 Thread Tom Lane
Seamus Abshere  writes:
> I hesitate to share my query and indexes because it makes this question
> seem more esoteric than I think it really is... but here we go.
> * Version 9.6.3.
> * I don't have any foreign key constraints.
> * I don't use serializable.
> * My update query is very careful to stay in an id range. [1]
> * I do have some exotic indexes [2]. gist, gin, postgis, fillfactor...

I'd bet on the last one, especially since you found that the problem
was a page-level lock.  Did you look to see which relation the page
lock was in?

    regards, tom lane




Re: [GENERAL] Non-overlapping updates blocking each other

2017-10-14 Thread Tom Lane
Melvin Davidson  writes:
> On Sat, Oct 14, 2017 at 12:01 PM, Thomas Kellerer 
> wrote:
>> Since when does Postgres lock the whole table during an update?

> When the optimizer thinks it needs to do a TABLE SCAN!

Sorry, but that's nonsense.

More likely explanations for the OP's problem involve foreign key
constraints that cause two different row updates to need to lock
the same referenced row, or maybe he's using some index type that
has greater locking demands than a btree, or he's using serializable
mode and fell foul of one of its approximations as to which rows
the update depends on, or perhaps some other corner case.  We'd need
more info about the schema and the Postgres version to tell for sure.

    regards, tom lane




Re: [GENERAL] Why does increasing the precision of a numeric column rewrites the table?

2017-10-11 Thread Tom Lane
Thomas Kellerer  writes:
> I don't understand why going from numeric(12,2) to numeric(15,3) would 
> require a table rewrite. 

The comment for numeric_transform explains this:

 * Flatten calls to numeric's length coercion function that solely represent
 * increases in allowable precision.  Scale changes mutate every datum, so
 * they are unoptimizable.  Some values, e.g. 1E-1001, can only fit into an
 * unconstrained numeric, so a change from an unconstrained numeric to any
 * constrained numeric is also unoptimizable.

The issue is basically that changing '1.00' to '1.000' requires a change
in the actually-stored value.
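
For example, given a hypothetical table t with a column amount of type
numeric(12,2):

-- precision-only increase: the length-coercion call is flattened away,
-- so no table rewrite is needed
ALTER TABLE t ALTER COLUMN amount TYPE numeric(15,2);

-- scale change: '1.00' must become '1.000', so every stored datum is
-- mutated and the whole table is rewritten
ALTER TABLE t ALTER COLUMN amount TYPE numeric(15,3);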

    regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-11 Thread Tom Lane
Simon Riggs  writes:
> On 11 October 2017 at 08:09, Christophe Pettus  wrote:
>> While it's certainly true that this was an extreme case, it was a real-life 
>> production situation.  The concern here is that in the actual production 
>> situation, the only symptom was that the startup process just stopped.  
>> There were no log messages or any other indication of what was going wrong.

> Which indicates it was making progress, just slowly.

> Tom says "This is pretty easy to diagnose though
> because it spews "out of shared memory" WARNING messages to the
> postmaster log at an astonishing rate"

> These don't seem to match.

Yeah.  I'm still suspicious that Christophe saw some other misbehavior
than the one I found.  We know his server was dealing with < 10K locks,
which doesn't seem like enough to cause any obvious problem from a mere
O(N^2) behavior.

regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-10 Thread Tom Lane
Christophe Pettus  writes:
> I was able to reproduce this on 9.5.9 with the following:

Hmm ... so I still can't reproduce the specific symptoms Christophe
reports.

What I see is that, given this particular test case, the backend
process on the master never holds more than a few locks at a time.
Each time we abort a subtransaction, the AE lock it was holding
on the temp table it created gets dropped.  However ... on the
standby server, pre v10, the replay process attempts to take all
12000 of those AE locks at once.  This is not a great plan.

On 9.5, for me, as soon as we're out of shared memory
ResolveRecoveryConflictWithLock will go into an infinite loop.
And AFAICS it *is* infinite; it doesn't look to me like it's
making any progress.  This is pretty easy to diagnose though
because it spews "out of shared memory" WARNING messages to the
postmaster log at an astonishing rate.

9.6 hits the OOM condition as well, but manages to get out of it
somehow.  I'm not very clear how, and the log trace doesn't look
like it's real clean: after a bunch of these

WARNING:  out of shared memory
CONTEXT:  xlog redo at 0/C1098AC0 for Standby/LOCK: xid 134024 db 423347 rel 524106
WARNING:  out of shared memory
CONTEXT:  xlog redo at 0/C10A97E0 for Standby/LOCK: xid 134024 db 423347 rel 524151
WARNING:  out of shared memory
CONTEXT:  xlog redo at 0/C10B36B0 for Standby/LOCK: xid 134024 db 423347 rel 524181
WARNING:  out of shared memory
CONTEXT:  xlog redo at 0/C10BD780 for Standby/LOCK: xid 134024 db 423347 rel 524211

you get a bunch of these

WARNING:  you don't own a lock of type AccessExclusiveLock
CONTEXT:  xlog redo at 0/C13A79B0 for Transaction/COMMIT: 2017-10-10 15:05:56.615721-04
LOG:  RecoveryLockList contains entry for lock no longer recorded by lock manager: xid 134024 database 423347 relation 526185
CONTEXT:  xlog redo at 0/C13A79B0 for Transaction/COMMIT: 2017-10-10 15:05:56.615721-04
WARNING:  you don't own a lock of type AccessExclusiveLock
CONTEXT:  xlog redo at 0/C13A79B0 for Transaction/COMMIT: 2017-10-10 15:05:56.615721-04
LOG:  RecoveryLockList contains entry for lock no longer recorded by lock manager: xid 134024 database 423347 relation 526188
CONTEXT:  xlog redo at 0/C13A79B0 for Transaction/COMMIT: 2017-10-10 15:05:56.615721-04
WARNING:  you don't own a lock of type AccessExclusiveLock
CONTEXT:  xlog redo at 0/C13A79B0 for Transaction/COMMIT: 2017-10-10 15:05:56.615721-04
LOG:  RecoveryLockList contains entry for lock no longer recorded by lock manager: xid 134024 database 423347 relation 526191
CONTEXT:  xlog redo at 0/C13A79B0 for Transaction/COMMIT: 2017-10-10 15:05:56.615721-04

The important point though is that "a bunch" is a finite number,
whereas 9.5 seems to be just stuck.  I'm not sure how Christophe's
server managed to continue to make progress.

It looks like the 9.6-era patch 37c54863c must have been responsible
for that behavioral change.  There's no indication in the commit message
or the comments that anyone had specifically considered the OOM
scenario, so I think it's just accidental that it's better.

v10 and HEAD avoid the problem because the standby server doesn't
take locks (any at all, AFAICS).  I suppose this must be a
consequence of commit 9b013dc238c, though I'm not sure exactly how.

Anyway, it's pretty scary that it's so easy to run the replay process
out of shared memory pre-v10.  I wonder if we should consider
backpatching that fix.  Any situation where the replay process takes
more locks concurrently than were ever held on the master is surely
very bad news.

A marginally lesser concern is that the replay process does need to have
robust behavior in the face of locktable OOM.  AFAICS whatever it is doing
now is just accidental, and I'm not sure it's correct.  "Doesn't get into
an infinite loop" is not a sufficiently high bar.

And I'm still wondering exactly what Christophe actually saw ...

regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-10 Thread Tom Lane
Alvaro Herrera  writes:
> Tom Lane wrote:
>> Hmm, I tried to reproduce this and could not.  I experimented with
>> various permutations of this:

> This problem is probably related to commit 9b013dc238c, which AFAICS is
> only in pg10, not 9.5.

You're right, I was testing on HEAD, so that patch might've obscured
the problem.  But the code looks like it could still be O(N^2) in
some cases.  Will look again later.

        regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-09 Thread Tom Lane
Peter Geoghegan  writes:
> Just a guess, but do you disable autovacuum on your dev machine? (I know I 
> do.)

Nope.

regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-09 Thread Tom Lane
Christophe Pettus  writes:
> The problem indeed appear to be a very large number of subtransactions, each 
> one creating a temp table, inside a single transaction.  It's made worse by 
> one of those transactions finally getting replayed on the secondary, only to 
> have another one come in right behind it...

Hmm, I tried to reproduce this and could not.  I experimented with
various permutations of this:

create or replace function goslow(n int) returns void language plpgsql as
$$
begin
for i in 1..n loop
  begin
execute 'create temp table myfoo' || i || '(f1 int)';
perform i/0;
  exception when division_by_zero then null;
  end;
end loop;
perform n/0;
end;
$$;

and did not see any untoward behavior, at least not till I got to enough
temp tables to overrun the master's shared lock table, and even then it
cleaned up fine.  At no point was the standby process consuming anywhere
near as much CPU as the master's backend.

What am I missing to reproduce the problem?

regards, tom lane




Re: [GENERAL] Equivalence Classes when using IN

2017-10-09 Thread Tom Lane
David Rowley  writes:
> If the only reason that is_simple_subquery() rejects subqueries with
> ORDER BY is due to wanting to keep the order by of a view, then
> couldn't we make is_simple_subquery() a bit smarter and have it check
> if the subquery is going to be joined to something else, which likely
> would destroy the order, or at least it would remove any guarantees of
> it.

I'm not on board with this.  The assumption is that if the user put an
ORDER BY there, that means they want that subquery to be computed in that
order.  It's not for us to decide they didn't mean what they said.

Moreover, there are cases where the ORDER BY would be semantically
significant, eg if there's a LIMIT or volatile functions or tSRFs
involved.

BTW, I noticed that I was wrong upthread about ORDER BY in subqueries
being disallowed by spec --- that was true originally, but they allow
it as of SQL:2008 or thereabouts.  It might be interesting to see if
the spec says anything concrete about the semantics of that.

regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-09 Thread Tom Lane
Christophe Pettus  writes:
>> On Oct 9, 2017, at 13:26, Tom Lane  wrote:
>> My bet is that the source server did something that's provoking O(N^2)
>> behavior in the standby server's lock management.  It's hard to say
>> exactly what, but I'm wondering about something like a plpgsql function
>> taking an AccessExclusiveLock inside a loop that repeatedly traps an
>> exception.  Can you correlate where the standby is stuck with what
>> was happening on the source?

> Interestingly, the OIDs for the relations on which the locks on the secondary 
> are held aren't present in pg_class, and they're clustered together.  Could a 
> large number of temporary table creations that are being undone by an abort 
> cause this?

Hmm.  Creating or dropping a temp table does take AccessExclusiveLock,
just as it does for a non-temp table.  In principle we'd not have to
transmit those locks to standbys, but I doubt that the WAL code has
enough knowledge to filter them out.  So a lot of temp tables and
a lot of separate subtransactions could be a nasty combination.

regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-09 Thread Tom Lane
Christophe Pettus  writes:
>> On Oct 9, 2017, at 13:01, Tom Lane  wrote:
>> Is that number changing at all?

> Increasing:
>  AccessExclusiveLock |  8810

Oh, that's really interesting.  So it's not *just* releasing locks but
also acquiring them, which says that it is making progress of some sort.

My bet is that the source server did something that's provoking O(N^2)
behavior in the standby server's lock management.  It's hard to say
exactly what, but I'm wondering about something like a plpgsql function
taking an AccessExclusiveLock inside a loop that repeatedly traps an
exception.  Can you correlate where the standby is stuck with what
was happening on the source?

regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-09 Thread Tom Lane
Christophe Pettus  writes:
> The other observation is that the startup process is holding a *lot* of locks:

Hmm.  Is it possible that the process is replaying the abort of a
transaction with a lot of subtransactions?  It seems like maybe
you could be getting into an O(N^2) situation, though for N in
the low thousands it's hard to see that loop taking so long that
you'd think things were stuck.

> # select mode, count(*) from pg_locks where pid=5882 group by mode;
>  AccessExclusiveLock |  7133

Is that number changing at all?

        regards, tom lane




Re: [GENERAL] startup process stuck in recovery

2017-10-09 Thread Tom Lane
Christophe Pettus  writes:
> We're dealing with a 9.5.5 database with the symptom that, after a certain 
> amount of time after restart, the startup process reaches a certain WAL 
> segment, and stops.  The startup process runs at 100% CPU, with no output 
> from strace.  There are no queries running on the secondary, so it's not a 
> delay for queries.  The WAL receiver continues to operate normally, and we 
> can see sent_location / write_location / flush_location continue to move 
> ahead in parallel, with replay_location stuck in that WAL segment.

> Suggestions on further diagnosis?

Attach to startup process with gdb, and get a stack trace?

    regards, tom lane




Re: [GENERAL] Equivalence Classes when using IN

2017-10-09 Thread Tom Lane
David Rowley  writes:
> It's not all that clear what your view is doing here. Confusingly
> there's a Sort in the plan, yet nothing in the query asked for that,
> so I guess that the view must have an ORDER BY. If you get rid of that
> the planner would likely use an index on product (customer_id) to
> parameterise the nested loop, at least, it likely would, if you have
> one.

Yeah.  The ORDER BY creates a partial optimization fence, preventing
any such plan from being considered.

> It's pretty bad practice to have ORDER BY in views. I kinda wish we
> didn't even allow it, but that ship sailed many years ago...

I think it's actually disallowed by the SQL spec (although so are
many other things we support).  IMO it's a useful facility to have
for views that are meant for direct presentation to clients ---
but if you'd like joins to the view to be optimized, you don't
want an ORDER BY in there.
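
As a sketch (table and column names invented for illustration):

-- fine for direct presentation to a client:
CREATE VIEW product_list AS
  SELECT id, customer_id, name, price FROM product ORDER BY name;

-- better when the view will be joined against; let the outer query
-- impose whatever ordering it needs:
CREATE VIEW product_plain AS
  SELECT id, customer_id, name, price FROM product;

SELECT c.name, p.name
FROM customer c JOIN product_plain p ON p.customer_id = c.id
ORDER BY c.name;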

regards, tom lane




Re: [GENERAL] Equivalence Classes when using IN

2017-10-08 Thread Tom Lane
David Rowley  writes:
> There are more details of the discussion in [1], although there's
> probably lots more threads to be found if you search the archives.
> [1] 
> https://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com#cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com

That thread seems to be about transitively applying inequalities
(ie, given "a = b and a < c", deduce "b < c"), which seems like a bit
of a different animal than IN.  Some of the issues are similar
perhaps, but I'd think that being able to deduce "b IN somelist"
from "a = b and a IN somelist" is more valuable, simply because the
IN would typically be a much stronger constraint than an inequality.
So that idea suggests that it's more worth expending planner cycles
to chase the possibility.

I do vaguely recall discussions specifically around IN, though
I didn't have any luck finding one in the archives.  There's also
the recent thread
https://www.postgresql.org/message-id/flat/camjna7cc4x9yr-vajs-jsycajhrdvjqnn7m2slh1wlh-_z2...@mail.gmail.com
which suggests being able to simplify "a IN somelist AND
a IN someotherlist".  If we wanted to do that, making the
"lists" be treatable as eclass members would be a good
place to start, because that would naturally result in
intersect-able lists ending up in the same eclass.
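
Concretely, the deduction under discussion would look like this
(hypothetical tables):

-- given the quals:  t1.a = t2.b AND t1.a IN (1, 2, 3)
-- the planner could deduce:  t2.b IN (1, 2, 3)
-- which would then permit an index scan on t2.b
SELECT *
FROM t1 JOIN t2 ON t1.a = t2.b
WHERE t1.a IN (1, 2, 3);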

regards, tom lane




Re: [GENERAL] Functions and Parentheses

2017-10-06 Thread Tom Lane
"Igal @ Lucee.org"  writes:
> How come `current_date` has no parentheses but `clock_timestamp()` does?

Because the SQL standard says that CURRENT_DATE doesn't have parentheses.
It is a function by any other measure, though.

(AFAICT, the SQL committee is unacquainted with any principles of
programming language syntax design that emerged later than the COBOL
era.  Their capacity to invent new and non-orthogonal syntax for every
new feature seems boundless.)
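
Both are functions under the hood; only the spec-mandated spelling
differs:

SELECT current_date,        -- SQL-spec syntax, no parentheses allowed
       current_timestamp,   -- likewise; frozen at transaction start
       clock_timestamp();   -- ordinary syntax; advances within a transaction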

        regards, tom lane




Re: [GENERAL] compiling postgres_fdw fails for 9.6.5

2017-10-06 Thread Tom Lane
Sandeep Gupta  writes:
> I downloaded the source, configured it with prefix=install_dir and
> then moved to contrib/postgres_fdw dir.
> However make fails with following error:
> ../../src/include/utils/elog.h:71:28: fatal error: utils/errcodes.h:
> No such file or directory

Try building the core code first.

        regards, tom lane




Re: [GENERAL] Plan changes from index scan to seq scan after 5 executions

2017-09-30 Thread Tom Lane
Alexander Kukushkin  writes:
> Original problem has been found on 9.4.9, but I was able to reproduce it on
> 10rc1.

The problem here is that the generic estimate of the selectivity of
"where value = $1" is 1.0, because var_eq_non_const makes the not
unreasonable assumption that the unknown value being searched for is
one of the values appearing in the table, and there is only one.
This makes the estimated cost of the seqscan+limit plan nearly nil,
since it's expected to return the first row it comes to.  So that
plan gets chosen if we're considering a generic plan that doesn't
know the specific value of $1.  And at that point the plancache
says to itself "I'm not getting any benefit in estimated cost for
the custom plans I've been making, so I'll stop doing that".

This is not an easy thing to improve without making other cases
worse :-(.  One heuristic that I've been wondering about is whether
we could say "if the generic plan appears cheaper than any custom
plan we've made so far, disbelieve it, because probably it's based
on wrong estimates".  But I'm not sure if that would have its own
failure modes.  Anyway nobody's tried it yet.

You can find more discussion of this problem in the -hackers archives.

As for workarounds, the only short-term fix I can suggest is to use
EXECUTE for this query in your function, thus preventing caching of
a plan for it.
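
A sketch of that workaround (function and table names invented):
EXECUTE builds a fresh plan on every call, so the planner always sees
the actual parameter value and never falls back to the generic plan.

CREATE OR REPLACE FUNCTION lookup_one(p_value int) RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
  result bigint;
BEGIN
  -- dynamic SQL: the plan is never cached across calls
  EXECUTE 'SELECT id FROM big_table WHERE value = $1 LIMIT 1'
    INTO result
    USING p_value;
  RETURN result;
END
$$;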

regards, tom lane




Re: [GENERAL] PG 10 and perl

2017-09-29 Thread Tom Lane
Andy Colson  writes:
> I started playing with 10, and cannot get it to use the right version of 
> perl.  It'll only use the system version:
> root@firefly:/usr/local/pg10/lib/postgresql# ldd plperl.so
>   libperl.so => /usr/lib64/perl5/CORE/libperl.so (0x7fc9c67d4000)

Hm, I don't think we've changed anything about that in v10.

> But, here's the thing.  The config.log reports 5.26 version:

That's good, but what's probably missing is an "rpath" specification when
linking plperl.so.  Did you build with --disable-rpath, perhaps?  That's
the usual recommendation when building for Linux, but if you want to use
any non-default libraries, you can't do it.

Go into src/pl/plperl, remove and remake plperl.so, and see whether the
link command includes anything like

-Wl,-rpath,'/usr/lib64/perl5/CORE',--enable-new-dtags

(That's what I see when building with a stock Linux Perl configuration and
rpath enabled.)  If there's no such switch, or if it doesn't point to
where the libperl.so that you want to use is, then there's your problem.
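
If the switch is missing, one way to get it (paths here are examples
only) is to rebuild with rpath enabled and configure pointed at the
intended Perl; note the absence of --disable-rpath:

$ ./configure --prefix=/usr/local/pg10 --with-perl PERL=/usr/local/bin/perl
$ make && make install
$ ldd /usr/local/pg10/lib/postgresql/plperl.so | grep libperl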

regards, tom lane




Re: [GENERAL] many many open files on pgsql_tmp with size 0

2017-09-28 Thread Tom Lane
francis cherat  writes:
> when I execute a huge SQL query on my database (version 9.3), I see many many 
> open files (36) on pgsql_tmp with size 0.
> These files are deleted when the query is done.

Hmm.  Can we see the query that causes this, and maybe EXPLAIN ANALYZE
output for it?  Also, 9.3.what exactly?

regards, tom lane




Re: [GENERAL] Catching errors inside a LOOP is causing performance issues

2017-09-27 Thread Tom Lane
"David G. Johnston"  writes:
> ​Not sure how much detail you are looking for but the docs say this:
> "​Tip: A block containing an EXCEPTION clause is significantly more
> expensive to enter and exit than a block without one. Therefore, don't use
> EXCEPTION without need."
> https://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

> I'm somewhat doubting "plan caching" has anything to do with this; I
> suspect its basically that there is high memory and runtime overhead to
> deal with the possibilities of needing to convert a exception into a branch
> instead of allowing it to be fatal.

Yeah, it's about the overhead of setting up and ending a subtransaction.
That's a fairly expensive mechanism, but we don't have anything cheaper
that is able to recover from arbitrary errors.
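
The cost difference shows up as soon as the BEGIN ... EXCEPTION block
sits inside a loop; a schematic comparison (do_something is a
placeholder):

-- cheap: no subtransaction is ever created
FOR i IN 1..n LOOP
  PERFORM do_something(i);
END LOOP;

-- expensive: every iteration starts and ends its own subtransaction
FOR i IN 1..n LOOP
  BEGIN
    PERFORM do_something(i);
  EXCEPTION WHEN others THEN
    NULL;  -- swallow the error and continue
  END;
END LOOP;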

regards, tom lane




Re: [GENERAL] Is float8 a reference type?

2017-09-23 Thread Tom Lane
Paul A Jungwirth  writes:
> Since I'm expecting ~10 million elements per array, it seems like
> skipping the conversion will have a real effect. I checked the
> assembly and do see a difference (on both Mac+clang and Linux+gcc).

I wonder whether you're using up-to-date Postgres headers (ones
where Float8GetDatum is a static inline function).  For me, both
of those platforms recognize it as a no-op --- in fact, clang
turns a loop like

for (i = 0; i < n; i++) {
datums[i] = Float8GetDatum(floats[i]);
}

into something that looks suspiciously like an inlined, loop-unrolled
memcpy().

        regards, tom lane




Re: [GENERAL] Adding Cyrillic support

2017-09-23 Thread Tom Lane
John R Pierce  writes:
> On 9/23/2017 1:44 AM, Job wrote:
>> how can i add more character support to PostgreSql 9.6?
>> I need to also store some information in Cyrillic, for Russian users.

> utf-8 should be able to store just about any character.

Or there's about four different Cyrillic-specific character sets
available already:

https://www.postgresql.org/docs/current/static/multibyte.html#CHARSET-TABLE

        regards, tom lane




Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-23 Thread Tom Lane
John R Pierce  writes:
> On 9/22/2017 10:29 PM, Tim Uckun wrote:
>> The app can now connect to the database but it can't create any 
>> tables, schemas or anything else unless I give it superuser privileges.

> that should have worked just fine.

Yeah.  Note that creating new schemas is not directly connected to
ownership anyway --- it's a question of whether you have the CREATE
privilege on the database.  The owner should have that privilege
by default, but it could be revoked, or granted to others.
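
So the usual fix is a GRANT rather than superuser (database and role
names below are placeholders):

-- let the app role create schemas (and other top-level objects) in mydb
GRANT CREATE ON DATABASE mydb TO app_role;

-- verify:
SELECT has_database_privilege('app_role', 'mydb', 'CREATE');

-- note: creating tables additionally requires CREATE on the target
-- schema, e.g.:
GRANT CREATE ON SCHEMA public TO app_role;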

        regards, tom lane




Re: [GENERAL] Is float8 a reference type?

2017-09-22 Thread Tom Lane
Paul A Jungwirth  writes:
> On Fri, Sep 22, 2017 at 8:05 PM, Pavel Stehule  
> wrote:
>> I don't think so it is good idea to write 64bit only extensions.

> I agree, but how about this?:

"Premature optimization is the root of all evil".  Do you have good reason
to think that it's worth your time to write unsafe/unportable code?  Do
you know that your compiler doesn't turn Float8GetDatum into a no-op
already?  (Mine does, on a 64-bit machine.)

    regards, tom lane




Re: [GENERAL] Dynamic use of RAISE with USING to generate and catch non-hardcoded custom exceptions

2017-09-21 Thread Tom Lane
mike davis  writes:
> I'm trying to get a dynamic version of the RAISE command working so
> that I can use a table of custom application error messages and codes
> for use by all developed plpgsql functions.

This works for me:

DO $$
DECLARE
  v_msg TEXT := 'SOMETHING IS WRONG';
  v_sqlstate TEXT := 'E0001';
BEGIN
  RAISE EXCEPTION USING message = v_msg, errcode = v_sqlstate;
EXCEPTION
  WHEN SQLSTATE 'E0001' THEN
 RAISE NOTICE '%','Error E0001 raised - going to do something about it';
  WHEN OTHERS THEN
 RAISE NOTICE 'OTHER ERRORS: %,%', sqlstate,sqlerrm;
END$$;

NOTICE:  Error E0001 raised - going to do something about it

Or you could do

  RAISE EXCEPTION SQLSTATE v_sqlstate USING message = v_msg;

regards, tom lane




Re: [GENERAL] Performance appending to an array column

2017-09-21 Thread Tom Lane
Paul A Jungwirth  writes:
> I'm considering a table structure where I'd be continuously appending
> to long arrays of floats (10 million elements or more). Keeping the
> data in arrays gives me much faster SELECT performance vs keeping it
> in millions of rows.

> But since these arrays keep growing, I'm wondering about the UPDATE
> performance.

It's going to suck big-time :-(.  You'd be constantly replacing all
of a multi-megabyte toasted field.  Even if the UPDATE speed per se
seemed tolerable, this would be pretty nasty in terms of the
vacuuming overhead and/or bloat it would impose.

My very first use of Postgres, twenty years ago, involved time series
data which perhaps is much like what you're doing.  We ended up keeping
the time series data outside the DB; I doubt the conclusion would be
different today.  I seem to recall having heard about a commercial fork
of PG that is less bad for this type of data, but the community code
is not the weapon you want.

regards, tom lane




Re: [GENERAL] hard parse?

2017-09-21 Thread Tom Lane
"David G. Johnston"  writes:
> On Thu, Sep 21, 2017 at 5:48 AM, Peter Koukoulis 
> wrote:
>> I have a query where a filter would always be negative. How many steps,
>> out of these:
>> 
>> - parsing and syntax check
>> - semantic analysis
>> - transformation process (query rewrite based on system or
>> user-defined rules)
>> - query optimization
>> - execution
>> 
>> would be performed or not? Also, where in the documentation can I find
>> out which of the above phases would be performed?

> All of them.

Yeah.  The question is more usefully formulated as "how much will the
query optimizer collapse a query with a constant-false condition"?
You can answer that with EXPLAIN, eg.

regression=# create table test1 (x int, y int);
CREATE TABLE
regression=# explain select x,y from test1 where 1=0;
QUERY PLAN
--
 Result  (cost=0.00..0.00 rows=0 width=8)
   One-Time Filter: false
(2 rows)

In this case the answer is "pretty far" --- you get a valid but
dummy plan, which will just exit without returning any rows.

regards, tom lane




Re: [GENERAL] Puzzled by UNION with unknown types

2017-09-20 Thread Tom Lane
Ken Tanzer  writes:
> Hi.  I've solved my practical problems, and I know unknown types are
> just bad, but am still curious about why some of these cause errors,
> and others don't.

> ag_consulting=> SELECT * FROM (SELECT '1'::unknown) foo UNION ALL
> SELECT * FROM (SELECT '1'::unknown) bar;
> ERROR:  failed to find conversion function from unknown to text

FWIW, this succeeds in 9.5 and up, as a result of more aggressive
application of the rule "coerce an unknown value to text if we cannot
find any other interpretation".

> ag_consulting=> CREATE TEMP TABLE t1 AS SELECT '1'::unknown; \d t1
> WARNING:  column "unknown" has type "unknown"
> DETAIL:  Proceeding with relation creation anyway.
> SELECT 1
>  Table "pg_temp_2.t1"
>  Column  |  Type   | Modifiers
> -+-+---
>  unknown | unknown |

As of v10, this will produce a table with a column of type text,
not type unknown, again as a result of more aggressively forcing
unknown to be something else.

regards, tom lane
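
As the thread suggests, casting explicitly to a concrete type avoids the
unknown-type problems in any version (names are hypothetical):

    SELECT * FROM (SELECT '1'::text AS v) foo
    UNION ALL
    SELECT * FROM (SELECT '1'::text AS v) bar;

    CREATE TEMP TABLE t1 AS SELECT '1'::text AS v;  -- column is text, no WARNING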




Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-19 Thread Tom Lane
chiru r  writes:
> We are looking for user profiles in open source PostgreSQL.
> For example, if a user's password fails n+ times at login, the user's
> access should be blocked for a few seconds.
> Please let us know, is there any plan to implement user profiles in future
> releases?

Not particularly.  You can do that sort of thing already via PAM,
for example.

        regards, tom lane
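
For example, authentication can be delegated to PAM (and from there to
modules such as pam_tally2/pam_faillock, which lock accounts after repeated
failures) with a pg_hba.conf line along these lines -- the address range and
service name here are placeholders:

    # TYPE  DATABASE  USER  ADDRESS      METHOD
    host    all       all   10.0.0.0/8   pam pamservice=postgresql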




Re: [GENERAL] reload postgresql with invalid pg_hba.conf

2017-09-18 Thread Tom Lane
"David G. Johnston"  writes:
> On Mon, Sep 18, 2017 at 12:36 PM, Tom Lane  wrote:
>> I wouldn't say it's desired behavior, exactly, but there's no very
>> good way to improve it.  pg_ctl has no visibility into what the postmaster
>> is thinking.

> The function signature (and docs) for pg_reload_conf makes the OPs
> interpretation understandable.

Yeah.  After more thought, it occurs to me that now that we did f13ea95f9,
we could improve matters by extending that concept: the postmaster could
record its last reload time in postmaster.pid along with a success/failure
flag, and pg_ctl could watch that file to detect what happened.

(No, I'm not volunteering to write the patch.)

regards, tom lane
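
As a partial workaround on PostgreSQL 10 and later, the pg_hba_file_rules
view reports parse errors in pg_hba.conf, so a failed reload can be detected
from SQL:

    SELECT line_number, error
      FROM pg_hba_file_rules
     WHERE error IS NOT NULL;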




Re: [GENERAL] reload postgresql with invalid pg_hba.conf

2017-09-18 Thread Tom Lane
jotpe  writes:
> A system administrator applied an invalid line to the pg_hba.conf 
> file and called "service postgresql reload". Since that command doesn't 
> return any error and exits with code 0, it seems that the new 
> configuration was applied.

> Of course postgresql added a warning to the log file: "pg_hba.conf not 
> reloaded".
> But even when I execute SELECT pg_reload_conf(), true is returned.

> Is this the desired behavior?

I wouldn't say it's desired behavior, exactly, but there's no very
good way to improve it.  pg_ctl has no visibility into what the postmaster
is thinking.

regards, tom lane




Re: [GENERAL] Remove useless joins (VARCHAR vs TEXT)

2017-09-17 Thread Tom Lane
David Rowley  writes:
> On 17 September 2017 at 08:07, Kim Rose Carlsen  wrote:
>> It seems there are some differences between VARCHAR and TEXT when postgres tries to
>> decide if a LEFT JOIN is useful or not.

> Yeah, it looks like the code to check for distinctness in the subquery
> fails to consider that the join condition may contain RelabelTypes
> instead of plain Vars.
> 
> The attached fixes.

Looks like a good fix to me (except for the copied-and-pasted,
not-quite-on-point comment ;-)).  Pushed.

        regards, tom lane
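
A minimal reproduction of the case being fixed might look like this
(hypothetical tables); the varchar join columns get a RelabelType cast to
text in the join condition, which is what the unpatched check tripped over.
With the fix, the left join can be removed since nothing from the subquery
is referenced and its output is provably distinct:

    CREATE TABLE a (id varchar PRIMARY KEY);
    CREATE TABLE b (x varchar);

    EXPLAIN SELECT b.x
      FROM b LEFT JOIN (SELECT DISTINCT id FROM a) a2 ON a2.id = b.x;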




Re: [GENERAL] cursor declare

2017-09-15 Thread Tom Lane
Peter Koukoulis  writes:
> This is my first cursor attempt:

> according to docs

> DECLARE
> curs1 refcursor;
> curs2 CURSOR FOR SELECT * FROM tenk1;
> curs3 CURSOR (key integer) FOR SELECT * FROM tenk1 WHERE unique1 = key;

> this should work, but getting error:

> ft_node=# declare cur_test1 CURSOR (key integer) for select * from test1
> where x=key;
> ERROR:  syntax error at or near "("
> LINE 1: declare cur_test1 CURSOR (key integer) for select * from tes...

It looks like you're trying to use the plpgsql syntax for a cursor
variable as part of a DECLARE CURSOR SQL-level command.  They're not
the same thing at all.  In particular, there isn't any concept of
parameters in the SQL DECLARE CURSOR command.

regards, tom lane
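
At the SQL level the cursor must be declared without parameters, inside a
transaction, with any "parameter" folded directly into the query:

    BEGIN;
    DECLARE cur_test1 CURSOR FOR
        SELECT * FROM test1 WHERE x = 42;
    FETCH NEXT FROM cur_test1;
    CLOSE cur_test1;
    COMMIT;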




Re: [GENERAL] How to add new Collation language

2017-09-14 Thread Tom Lane
Rob Northcott  writes:
> From: rob stone [mailto:floripa...@gmail.com] 
>> On Thu, 2017-09-14 at 11:30 +, Rob Northcott wrote:
>>> How can I add a collation language to a Postgres server?
>>> Specifically, I want to create a new database with collation of 
>>> English_United Kingdom.1252 but the only options are C, Posix and 
>>> United States.
>>> This is Postgres 9.6 on Windows server 2012.  The server is Swedish, 
>>> and my user login is set to UK.

>> initdb --locale=en_GB.UTF-8 . . . plus other init options as required.

> Thanks for the reply.  When I try that I get the message "invalid locale name"

en_GB.UTF-8 is a Unix-style locale name, it won't help you on Windows.
You need something like "English_United Kingdom.1252" on Windows.

I'm not sure what you mean by "the only options are" ... are you using
some GUI that only offers those options?  If so, this is a shortcoming
in that GUI.  Postgres itself should take whatever ctype/collation
settings the underlying system accepts.  You could try manually invoking
initdb as above, but with a Windows-style locale name.  Or if you want
to make a new database within an existing installation, use CREATE
DATABASE directly, setting the LC_COLLATE and LC_CTYPE options
(and selecting a matching ENCODING).

regards, tom lane
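
A sketch of the CREATE DATABASE route on Windows (the database name is a
placeholder; TEMPLATE template0 is needed because the locale differs from
the template database's):

    CREATE DATABASE mydb
        LC_COLLATE = 'English_United Kingdom.1252'
        LC_CTYPE   = 'English_United Kingdom.1252'
        ENCODING   = 'WIN1252'
        TEMPLATE   = template0;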




Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Tom Lane
"Frank Millman"  writes:
> Pavel Stehule wrote:
>> 2017-09-12 8:45 GMT+02:00 Frank Millman :
>>>   I am experimenting with optimising a SQL statement. One version uses 4 
>>> LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the 
>>> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no 
>>> CASE statements.

>> you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher 
>> number 14 maybe 16

> I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.

For this you need to increase join_collapse_limit, not
from_collapse_limit.  (Usually, though, there's little reason not to keep
them the same.)

regards, tom lane
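
For experimentation, both limits can be raised per-session before re-running
EXPLAIN on the 16-join query:

    SET join_collapse_limit = 16;
    SET from_collapse_limit = 16;  -- usually kept the same as above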




Re: [GENERAL] pg_restore error -- missing pg table

2017-09-11 Thread Tom Lane
""  writes:
> /opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms 
> --schema=public .
> ...
> pg_restore: [archiver (db)] could not execute query: ERROR:  function 
> _pg_sv_column_array(oid, smallint[]) does not exist
> LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
> ^
> HINT:  No function matches the given name and argument types. You might need 
> to add explicit type casts.

Hm, so what is _pg_sv_column_array()?  There's no built-in function by
that name.

The most likely theory is that pg_dump didn't dump that function, or
it did but pg_restore isn't restoring it, perhaps because of the --schema
restriction.  I'm not sure why the function name isn't showing up as
schema-qualified, though, if it isn't in the public schema.

regards, tom lane
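
If the function was simply excluded by the schema filter, one hedged first
step is to retry the restore without --schema, so that objects living
outside public are restored too (same paths as the original command):

    /opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms .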




Re: [GENERAL] B-tree index on a VARCHAR(4000) column

2017-09-10 Thread Tom Lane
John Turner  writes:
> On Sun, Sep 10, 2017 at 10:42 AM Merlin Moncure  wrote:
>> Nope.  Memory usage is proportional to the size of the string, not the
>> maximum length for varchar.  Maximum length is a constraint.

> Ok, thanks for verifying.  I was curious since other platforms seem to
> handle this aspect of memory allocation differently (more crudely, perhaps)
> based on estimation of how fully populated the column _might_ be given a
> size constraint:
> https://sqlperformance.com/2017/06/sql-plan/performance-myths-oversizing-strings

AFAIR, the only way in which a different declared max column length would
affect Postgres' behavior like that is that, depending on what other
columns are in the table, it might be able to prove that it doesn't need
to create a "toast table" for the table, because no row in the table could
ever be wide enough to require toasting.  That would save a few
microseconds during table creation and drop ... but an unused toast table
that's just sitting there is surely not much overhead.

For every other purpose, PG just pays attention to the actual column
values' lengths.

regards, tom lane




Re: [GENERAL] Analyzing performance regression from 9.2 to 9.6

2017-09-09 Thread Tom Lane
Dave Peticolas  writes:
> Hi, I am trying to analyze a performance regression from 9.2.21 to 9.6.3.
> The query and execution plans are below with 9.6.3 first.

Hm.  Neither version is exactly covering itself with glory.  I'm not sure
why 9.6 doesn't pick the same plan as 9.2, but given the planner's
estimate that the report_submission/report_skilltype join will produce
only one row, the difference in the estimated costs would be negligible.
Since that join is actually producing over a million rows, your chances
of getting a good plan hinge on improving that estimate.

> I realize the query itself is probably not great and would benefit from a
> different approach, but I'd like to know if there are 9.6 settings I should
> look into to get a better plan without changing the query if possible.

Doubt it.  Am I right in guessing that report_submission.id is a
declared-not-null column, so that the join

 FROM "report_skilltype"
   LEFT OUTER JOIN "report_submission"
 ON ("report_skilltype"."uuid" = "report_submission"."skill_type_id")
   WHERE ... "report_submission"."id" IS NULL

should be understood as an anti-join?  The planner doesn't get that
at the moment, for implementation reasons that needn't concern us here.
But it would get it if you said

   WHERE ... "report_submission"."skill_type_id" IS NULL

i.e. constrain the join column to be null.  Any chance of whacking your
ORM upside the head to the point where it would emit that?

regards, tom lane




Re: [GENERAL] B-tree index on a VARCHAR(4000) column

2017-09-08 Thread Tom Lane
Ron Johnson  writes:
> Based on LENGTH(offending_column), none of the values are more than 144 
> bytes in this 44.2M row table.  Even though VARCHAR is, by definition, 
> variable length, are there any internal design issues which would make 
> things more efficient if it were dropped to, for example, VARCHAR(256)?

No.

    regards, tom lane




Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-07 Thread Tom Lane
Ron Johnson  writes:
> On 09/07/2017 09:08 AM, Tom Lane wrote:
>> Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
>> to see where the last checkpoint is, and delete WAL files whose names
>> indicate they are before that (but not the one including the checkpoint!).

> All WAL files after log shipping was stopped will keep accumulating "forever"?

Hmm ... on second thought, I think if you just remove the .ready/.done
files, the next checkpoint should clean up the old WAL files.  That'd
certainly be safer than doing it manually.

regards, tom lane




Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-07 Thread Tom Lane
Ron Johnson  writes:
> After disabling log shipping via setting "archive_mode = off", and then 
> running, "pg_ctl reload", old WAL files and their associated .ready files 
> aren't being deleted.

Hmm.  I might be misremembering, but I think that it's the archiver
process that is in charge of deleting those files, so that this behavior
doesn't seem surprising.

I don't think anybody's thought very hard about how to clean up if
archiving had been running and then you turn it off with not everything
having been archived.

Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one including the checkpoint!).
If you don't intend to do archiving any more, you can just flush all the
.ready files (and .done if any) without much thought.

regards, tom lane
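
A cautious sketch of the cleanup described above, for a 9.x layout (take a
backup first, and never delete the segment containing the last checkpoint
or anything after it):

    # show the WAL segment the last checkpoint still needs
    pg_controldata "$PGDATA" | grep 'REDO WAL file'
    # if archiving is abandoned for good, drop the status files;
    # the next checkpoint can then recycle the old segments
    rm "$PGDATA"/pg_xlog/archive_status/*.ready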



