Re: Postgresql with JDK

2018-04-17 Thread Thomas Kellerer
vaibhav zaveri schrieb am 17.04.2018 um 08:47:
> Hi, 
> 
> Yes that is the link. 
> But is JDK 1.8 supported by PostgreSQL?? 
> 

Postgres itself does not need or "support" Java. 

Only the JDBC driver, which is a client-side technology, needs it.
And yes, the JDBC driver does support Java 8, which is clearly stated on the 
download page:

https://jdbc.postgresql.org/download.html

It supports PostgreSQL 8.2 or newer and requires Java 6 or newer. It contains 
support for SSL and the javax.sql package.

If you are using Java 8 or newer then you should use the JDBC 4.2 version.
If you are using Java 7 then you should use the JDBC 4.1 version.

Thomas






Re: To prefer sorts or filters in postgres, that is the question....

2018-04-17 Thread Bob Jones
>
> At a short glance, I'd say that they are pretty much the same.
> The filter and the top-1-sort will both require a single scan through
> the result set and one operation per row found.
> And the recursive queries are pretty similar, right?
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com


Thanks Laurenz.

After sending my original message, I did briefly reconsider things.

My current thinking is that the filter is a bit like an "fgrep" and
the sort actually requires memory allocation and some "real work", and
thus I've settled on the filter for now pending experiments with a
larger quantity of data.
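
For those experiments, running EXPLAIN (ANALYZE, BUFFERS) on both variants
should settle it; a rough sketch, with made-up table and column names:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM items ORDER BY sort_key DESC LIMIT 1;   -- top-1 sort

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM items WHERE next_id IS NULL;            -- filter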



RE: Re:Postgresql with JDK

2018-04-17 Thread Gao Jack
Hi,

yes, supported.

> If you are using Java 8 or newer then you should use the JDBC 4.2 version. <
If you are using Java 7 then you should use the JDBC 4.1 version.
If you are using Java 6 then you should use the JDBC 4.0 version.
If you are using a Java version older than 6 then you will need to use a JDBC3 
version of the driver, which will by necessity not be current, found in Other 
Versions.
PostgreSQL JDBC 4.2 Driver, 42.2.2

--
Jack Gao
jackg...@outlook.com

> -Original Message-
> From: vaibhav zaveri 
> Sent: Tuesday, April 17, 2018 2:47 PM
> To: pgsql-gene...@postgresql.org
> Subject: Re:Postgresql with JDK
> 
> Hi,
> 
> Yes that is the link.
> But is JDK 1.8 supported by PostgreSQL??
> 
> Regards,
> Vaibhav Zaveri
> 



Re: pg_dump to a remote server

2018-04-17 Thread Ron

On 04/16/2018 11:07 PM, Adrian Klaver wrote:

On 04/16/2018 06:43 PM, Ron wrote:



On 04/16/2018 07:18 PM, Adrian Klaver wrote:

On 04/16/2018 04:58 PM, Ron wrote:
We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The 
dump file will be more than 1TB, and there's not enough disk space on 
the current system for the dump file.


Thus, how can I send the pg_dump file directly to the new server while 
the pg_dump command is running?  NFS is one method, but are there 
others (netcat, rsync)?  Since it's within the same company, encryption 
is not required.


Maybe?:

pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'


That looks promising.  I could then "pg_restore -jX".


More promising would be the suggestion from Michael Nolan:

https://www.postgresql.org/message-id/CAOzAqu%2BVpOfzBHwcqptSzm3PkeZAjkqqc0XqB%2BA-jBNioU6x%2Bg%40mail.gmail.com 



"Can you run pg_dump on the new server, connecting remotely to the current 
one?"


It eliminates two programs (ssh and cat) and a pipe.


Is that supported?

--
Angular momentum makes the world go 'round.



Re: To prefer sorts or filters in postgres, that is the question....

2018-04-17 Thread Laurenz Albe
Bob Jones wrote:
> My current thinking is that the filter is a bit like an "fgrep" and
> the sort actually requires memory allocation and some "real work", and
> thus I've settled on the filter for now pending experiments with a
> larger quantity of data.

That's fine.

A top-1-sort is less work than you maybe think:
You go through all items and find the biggest one.
So there is one comparison operator per row - very similar to
what happens when "grepping" for NULL values.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



RE: pg_dump to a remote server

2018-04-17 Thread Gao Jack
Hi Ron,

I have some pg_dump test results, for reference only 😊

--
[ENV]

Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz  | SSD 120GB |  8G memory
(PostgreSQL) 9.6.8
--
[DATA]
my database has 7.2GB of random data:

postgres=# select pg_size_pretty(pg_database_size('postgres'));
 pg_size_pretty
----------------
 7201 MB
(1 row)

--

[Test Results]

                 command                  | export_time | output_size
------------------------------------------+-------------+-------------
 pg_dump postgres > outfile.sql           | 16m23s      | 6.3 GB
 pg_dump postgres | gzip > outfile.gz     | 5m27s       | 2.4 GB
 pg_dump -Fc postgres > outfile.dump      | 5m33s       | 2.4 GB
 pg_dump -Fc -Z 9 postgres > outfile.dump | 11m59s      | 2.4 GB
 pg_dump -Ft postgres > outfile.dump      | 2m43s       | 6.3 GB
 pg_dump -Fd postgres -f dumpdir          | 5m17s       | 2.4 GB
 pg_dump -Fd -j 4 postgres -f dumpdir     | 2m50s       | 2.4 GB
(7 rows)

--
The smaller the amount of data transmitted over the network, the better.
You could try a compressed export method such as gzip, -Fc, -Ft, or -Fd -j 4 (faster).
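
Combining that with Adrian's ssh approach could look like this (just a sketch;
the host name, user and paths are placeholders, and it assumes ssh access from
the current server to the new one):

pg_dump -Fc mydb | ssh postgres@newhost 'cat > /data/mydb.dump'

or, skipping the intermediate file and restoring right away on the new server:

pg_dump -Fc mydb | ssh postgres@newhost 'pg_restore -d mydb'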


--
Jack Gao
jackg...@outlook.com

> -Original Message-
> From: Ron 
> Sent: Tuesday, April 17, 2018 9:44 AM
> To: Adrian Klaver ; pgsql-general
> Subject: Re: pg_dump to a remote server
> 
> 
> 
> On 04/16/2018 07:18 PM, Adrian Klaver wrote:
> > On 04/16/2018 04:58 PM, Ron wrote:
> >> We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The dump
> >> file will be more than 1TB, and there's not enough disk space on the
> >> current system for the dump file.
> >>
> >> Thus, how can I send the pg_dump file directly to the new server while
> >> the pg_dump command is running?  NFS is one method, but are there others
> >> (netcat, rsync)?  Since it's within the same company, encryption is not
> >> required.
> >
> > Maybe?:
> >
> > pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'
> 
> That looks promising.  I could then "pg_restore -jX".
> 
> --
> Angular momentum makes the world go 'round.



Re: dblink: give search_path

2018-04-17 Thread Thiemo Kellner, NHC Barhufpflege

Zitat von Adrian Klaver :


In addition to my previous suggestions:

test=# SELECT public.dblink_connect('dbname=production ');
 dblink_connect
----------------
 OK

test=# select * from public.dblink('show search_path') as t1(search_path text);
 search_path
-------------
 main

test=# select public.dblink_exec('set search_path=main,utility');
 dblink_exec
-------------
 SET
(1 row)

test=# select * from public.dblink('show search_path') as t1(search_path text);
  search_path
---------------
 main, utility



Hi Adrian. Thanks for pointing this out. I wonder why I did not think of it
myself. However, I have moved to a dedicated logging user, thus avoiding this
problem. It also seems a cleaner solution that way.
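
For the record, the search_path can also be given directly in the connection
string, since dblink_connect() takes libpq keywords; a sketch (the connection
name and database are made up):

SELECT public.dblink_connect('logconn',
  'dbname=production options=-csearch_path=main,utility');
SELECT * FROM public.dblink('logconn', 'show search_path')
  AS t1(search_path text);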


--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



Re: psql variable to plpgsql?

2018-04-17 Thread Thiemo Kellner

Zitat von Pavel Stehule :


no. The :xxx is not evaluated inside string. The workaround is using GUC
variables and related functions. Can be used from psql and from plpgsql too.


Hi Pavel, thanks for pointing this out. However, I implemented another
solution with a dedicated PostgreSQL user, where I do not need such
variables in literals.
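
For reference, the GUC workaround Pavel mentions could look roughly like this
(a sketch; the custom setting name myapp.username and the psql variable are
made up):

-- in psql, with a psql variable set via \set or -v:
\set appuser 'alice'
SELECT set_config('myapp.username', :'appuser', false);

-- later in the same session, inside PL/pgSQL:
DO $$
BEGIN
  RAISE NOTICE 'current app user is %', current_setting('myapp.username');
END;
$$;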


--
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



RE: pg_dump to a remote server

2018-04-17 Thread Brent Wood
from the pg_dump docs...

...
-Z 0..9
--compress=0..9

   Specify the compression level to use. Zero means no compression. For the 
custom archive format, this specifies compression of individual table-data 
segments, and the default is to compress at a moderate level. For plain text 
output, setting a nonzero compression level causes the entire output file to be 
compressed, as though it had been fed through gzip; but the default is not to 
compress.
...

so perhaps running (on host2):
pg_dump -h host1 -Z 9 dbname | zcat | psql -h host2 dbname

will generate a compressed text output on host1, which is sent over the wire to 
host2 where it is locally uncompressed & fed into psql...

Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529



From: Christoph Moench-Tegeder [c...@burggraben.net]
Sent: Tuesday, April 17, 2018 18:00
To: pgsql-general@lists.postgresql.org
Subject: Re: pg_dump to a remote server

## Ron (ronljohnso...@gmail.com):

> > pg_dump -h host1 dbname | psql -h host2 dbname
>
> But that assumes --format=plain which will send a whole lot of
> uncompressed text across the wire.

You can also use pg_restore with standard input, i.e. pg_dump | pg_restore.

Regards,
Christoph

--
Spare Space.






Re: pg_dump to a remote server

2018-04-17 Thread Adrian Klaver

On 04/17/2018 12:35 AM, Ron wrote:

On 04/16/2018 11:07 PM, Adrian Klaver wrote:

On 04/16/2018 06:43 PM, Ron wrote:





More promising would be the suggestion from Michael Nolan:

https://www.postgresql.org/message-id/CAOzAqu%2BVpOfzBHwcqptSzm3PkeZAjkqqc0XqB%2BA-jBNioU6x%2Bg%40mail.gmail.com 



"Can you run pg_dump on the new server, connecting remotely to the 
current one?"


It eliminates two programs (ssh and cat) and a pipe.


Is that supported?



Sure, as long as pg_dump on the new server can reach the host given with -h
and the pg_hba.conf on the current server is set up to allow connections from
that remote client.


To test do something like:

new_server> pg_dump -t some_table -s -h current_server -f test_file.sql

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql with JDK

2018-04-17 Thread Adrian Klaver

On 04/16/2018 11:47 PM, vaibhav zaveri wrote:

Hi,

Yes that is the link.
But is JDK 1.8 supported by PostgreSQL??


I believe it is one of those version numbering things, per:

https://en.wikipedia.org/wiki/Java_version_history#Versioning_change

"Versioning change

This version introduced a new versioning system for the Java language, 
although the old versioning system continued to be used for developer 
libraries:


Both version numbers "1.5.0" and "5.0" are used to identify this 
release of the Java 2 Platform Standard Edition. Version "5.0" is the 
product version, while "1.5.0" is the developer version. The number 
"5.0" is used to better reflect the level of maturity, stability, 
scalability and security of the J2SE.

— Version 1.5.0 or 5.0?[23]

This correspondence continued through later releases (Java 6 = JDK 1.6, 
Java 7 = JDK 1.7, and so on)."


So what you are seeing here:

https://jdbc.postgresql.org/download.html

is JDK 8, which is the same thing as JDK 1.8.



Regards,
Vaibhav Zaveri



--
Adrian Klaver
adrian.kla...@aklaver.com



Can PostgreSQL create new WAL files instead of reusing old ones?

2018-04-17 Thread David Pacheco
tl;dr: We've found that under many conditions, PostgreSQL's re-use of old WAL
files appears to significantly degrade query latency on ZFS.  The reason is
complicated and I have details below.  Has it been considered to make this
behavior tunable, to cause PostgreSQL to always create new WAL files instead
of re-using old ones?

Context: we're running a large fleet of PostgreSQL shards.  Each shard
consists of a primary, a synchronous standby, and an asynchronous standby
using chained replication.  For this problem, we can consider only the
primary and synchronous standby.

PostgreSQL: 9.6.3
OS: illumos (SmartOS, mixed versions, but all from 2017 or later)
FS: ZFS over mirrored HDDs (not SSDs), using a record size of 8K to match
PostgreSQL's record size.  We have an SSD log device for completing
synchronous writes very quickly.
WAL files are 16MB each, and we're keeping a very large number of segments.
(There's likely a lot of improvement for WAL tuning here.)

Since we're using an 8K record size, when PostgreSQL writes small (or
non-aligned) records to the WAL files, ZFS has to read the old contents in
order to write the new 8K record (read-modify-write).  If that's in cache,
that's not a big deal.  But when PostgreSQL decides to reuse an old WAL file
whose contents have been evicted from the cache (because they haven't been
used in hours), this turns what should be a workload bottlenecked by
synchronous write performance (that's well-optimized with our SSD log device)
into a random read workload (that's much more expensive for any system).

What's significantly worse is that we saw this on synchronous standbys.  When
that happened, the WAL receiver was blocked on a random read from disk, and
since it's single-threaded, all write queries on the primary stop until the
random read finishes.  This is particularly bad for us when the sync is doing
other I/O (e.g., for an autovacuum or a database backup) that causes disk
reads to take hundreds of milliseconds.

Reusing old WAL files seems like an optimization intended for filesystems
that allocate disk blocks up front.  With copy-on-write, it doesn't seem to
make much sense.  If instead of using an old WAL file, PostgreSQL instead
just created a new one, there would be no random reads required to complete
these operations, and we believe we'd completely eliminate our latency
outliers.

Thanks,
Dave


Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-04-17 Thread Alvaro Herrera
David Pacheco wrote:
> tl;dr: We've found that under many conditions, PostgreSQL's re-use of old
> WAL files appears to significantly degrade query latency on ZFS.  The reason
> is complicated and I have details below.  Has it been considered to make this
> behavior tunable, to cause PostgreSQL to always create new WAL files
> instead of re-using old ones?

I don't think this has ever been proposed, because there was no use case
for it.  Maybe you want to work on a patch for it?

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-04-17 Thread Tom Lane
Alvaro Herrera  writes:
> David Pacheco wrote:
>> tl;dr: We've found that under many conditions, PostgreSQL's re-use of old
>> WAL files appears to significantly degrade query latency on ZFS.  The reason is
>> complicated and I have details below.  Has it been considered to make this
>> behavior tunable, to cause PostgreSQL to always create new WAL files
>> instead of re-using old ones?

> I don't think this has ever been proposed, because there was no use case
> for it.  Maybe you want to work on a patch for it?

I think possibly the OP doesn't understand why it's designed that way.
The point is not really to "recycle old WAL files", it's to avoid having
disk space allocation occur during the critical section where we must
PANIC on failure.  Now, of course, that doesn't really work if the
filesystem is COW underneath, because it's allocating fresh disk space
anyway even though semantically we're overwriting existing data.
But what I'd like to see is a fix that deals with that somehow, rather
than continue to accept the possibility of ENOSPC occurring inside WAL
writes on these file systems.  I have no idea what such a fix would
look like :-(

regards, tom lane



Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-04-17 Thread David Pacheco
On Tue, Apr 17, 2018 at 11:57 AM, Tom Lane  wrote:

> Alvaro Herrera  writes:
> > David Pacheco wrote:
> >> tl;dr: We've found that under many conditions, PostgreSQL's re-use of old
> >> WAL files appears to significantly degrade query latency on ZFS.  The
> >> reason is complicated and I have details below.  Has it been considered
> >> to make this behavior tunable, to cause PostgreSQL to always create new
> >> WAL files instead of re-using old ones?
>
> > I don't think this has ever been proposed, because there was no use case
> > for it.  Maybe you want to work on a patch for it?
>
> I think possibly the OP doesn't understand why it's designed that way.
> The point is not really to "recycle old WAL files", it's to avoid having
> disk space allocation occur during the critical section where we must
> PANIC on failure.  Now, of course, that doesn't really work if the
> filesystem is COW underneath, because it's allocating fresh disk space
> anyway even though semantically we're overwriting existing data.
> But what I'd like to see is a fix that deals with that somehow, rather
> than continue to accept the possibility of ENOSPC occurring inside WAL
> writes on these file systems.  I have no idea what such a fix would
> look like :-(



I think I do understand, but as you've observed, recycling WAL files to
avoid allocation relies on the implementation details of the filesystem --
details that I'd expect not to be true of any copy-on-write filesystem.  On
such systems, there may not be a way to avoid ENOSPC in special critical
sections.  (And that's not necessarily such a big deal -- to paraphrase a
colleague, ensuring that the system doesn't run out of space does not seem
like a particularly surprising or heavy burden for the operator.  It's
great that PostgreSQL can survive this event better on some systems, but
the associated tradeoffs may not be worthwhile for everybody.)  And given
that, it seems worthwhile to provide the operator an option where they take
on the risk that the database might crash if it runs out of space (assuming
the result isn't data corruption) in exchange for a potentially tremendous
improvement in tail latency and overall throughput.

To quantify this: in a recent incident, transaction latency on the primary
was degraded about 2-3x (from a p90 of about 45ms to upwards of 120ms, with
outliers exceeding 1s).  Over 95% of the outliers above 1s spent over 90%
of their time blocked on synchronous replication (based on tracing with
DTrace).  On the synchronous standby, almost 10% of the WAL receiver's wall
clock time was spent blocked on disk reads in this read-modify-write path.
The rest of the time was essentially idle -- there was plenty of headroom
in other dimensions (CPU, synchronous write performance).

Thanks,
Dave


rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets

2018-04-17 Thread Thiemo Kellner

Hi all

When running the following query in psql (server and client version 10 with
replication, on Debian 9), I get the message mentioned in the subject. I
have not found much searching the internet. There were suggestions about
bloat, so I ran "vacuum (verbose, full, analyze)", but the message remains.


with PRO_UNNESTED_TYPES as(
  select
oid as PROOID,
PRONAME,
unnest(PROARGTYPES) as PROARGTYPE,
PRONAMESPACE,
PROOWNER
  from
PG_CATALOG.PG_PROC
) select
  P.PRONAME,
  string_agg(
T.TYPNAME,
', '
  ) as PARAMETER_LIST_STRING,
  G.GRANTEE
from
  PRO_UNNESTED_TYPES P
inner join PG_CATALOG.PG_TYPE T on
  P.PROARGTYPE = T.OID
inner join PG_CATALOG.PG_NAMESPACE N on
  P.PRONAMESPACE = N.OID
inner join INFORMATION_SCHEMA.ROUTINE_PRIVILEGES G on
  -- copied from INFORMATION_SCHEMA.ROUTINE_PRIVILEGES source
 -- as seen in DBeaver 4.3.2
(
(
  P.PRONAME::text || '_'::text
)|| P.PROOID::text
  )::INFORMATION_SCHEMA.SQL_IDENTIFIER = G.SPECIFIC_NAME
where
  N.NSPNAME = current_user
  and G.GRANTEE != current_user
group by
  P.PROOID,
  P.PRONAME,
  G.GRANTEE
order by
  P.PRONAME asc,
  G.GRANTEE asc,
  PARAMETER_LIST_STRING asc;

I use this installation for development, and for the time being I install and
re-install only a couple of functions, 3 tables and a single view. I install
into a dedicated schema which gets re-created at the beginning of my
install script.


I also ran the following statement, which I found on the net, to get an idea
of the bloat in my database (its output is shown just below, the statement
itself further down).


pg_depend_reference_index   944 kB
pg_proc 904 kB
pg_depend_depender_index880 kB
pg_largeobject_metadata_oid_index   8192 bytes
pg_enum_typid_sortorder_index   8192 bytes
pg_enum_typid_label_index   8192 bytes
pg_largeobject_loid_pn_index8192 bytes
pg_enum_oid_index   8192 bytes
pg_statistic_ext_oid_index  8192 bytes
pg_statistic_ext_name_index 8192 bytes

I am quite ok with pg_proc, however I do not know why the depend tables 
are so big and whether this is normal. The rest is fine by me too.


select
  relname,
  pg_size_pretty(
pg_relation_size(C.oid)
  )
from
  pg_class C
left join pg_namespace N on
  (
N.oid = C.relnamespace
  )
where
  nspname = 'pg_catalog'
order by
  2 desc limit 10;
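
To see what is actually filling pg_depend, something along these lines can
help (a quick sketch):

select classid::regclass, count(*)
from pg_depend
group by 1
order by 2 desc;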

I do not feel that my DB has a problem but I was taken aback a bit when 
I first saw the message in the subject.


I would be grateful if someone could shed a bit of light on this.

Kind regards Thiemo

--
Öffentlicher PGP-Schlüssel:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

array UNNESTed to rows stable with respect to order?

2018-04-17 Thread Thiemo Kellner

Hi all

I have created the following statement to get the ordered parameter list of
functions. I use UNNEST to get rows from the array. This works fine, but I am
not sure whether the ordering remains in later use. The background is
PL/pgSQL that revokes grants to get a pristine starting point for granting.
As the order of the parameters is important, I need a statement that returns
the properly ordered list of parameters. Maybe I took a wrong turn and this
can be achieved more simply.


Suggestions are very welcome.

Kind regards Thiemo

with PRO_UNNESTED_TYPES as(
  select
oid as PROOID,
PRONAME,
unnest(PROARGTYPES) as PROARGTYPE,
PRONAMESPACE,
PROOWNER
  from
PG_CATALOG.PG_PROC
) select
  P.PRONAME,
  string_agg(
T.TYPNAME,
', '
  ) as PARAMETER_LIST_STRING,
  G.GRANTEE
from
  PRO_UNNESTED_TYPES P
inner join PG_CATALOG.PG_TYPE T on
  P.PROARGTYPE = T.OID
inner join PG_CATALOG.PG_NAMESPACE N on
  P.PRONAMESPACE = N.OID
inner join INFORMATION_SCHEMA.ROUTINE_PRIVILEGES G on
  -- copied from INFORMATION_SCHEMA.ROUTINE_PRIVILEGES source
 -- as seen in DBeaver 4.3.2
(
(
  P.PRONAME::text || '_'::text
)|| P.PROOID::text
  )::INFORMATION_SCHEMA.SQL_IDENTIFIER = G.SPECIFIC_NAME
where
  N.NSPNAME = current_user
  and G.GRANTEE != current_user
group by
  P.PROOID,
  P.PRONAME,
  G.GRANTEE
order by
  P.PRONAME asc,
  G.GRANTEE asc,
  PARAMETER_LIST_STRING asc;



--
Öffentlicher PGP-Schlüssel:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: array UNNESTed to rows stable with respect to order?

2018-04-17 Thread David G. Johnston
On Tue, Apr 17, 2018 at 1:20 PM, Thiemo Kellner  wrote:

> This works fine but I am not sure whether the ordering remains in the
> later use.


It does not.  If the array is not naturally ordered you will want to
attach a "with ordinality" clause to it for performing future ordering.

select * from unnest(ARRAY[3,6,4]::integer[]) with ordinality

Use LATERAL to move the unnest from the select-list section to the FROM
clause.
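
Applied to the original query, that could look roughly like this (a sketch;
the aggregate is ordered by the argument position delivered by the ordinality
column):

SELECT p.oid AS prooid, p.proname,
       string_agg(t.typname, ', ' ORDER BY a.i) AS parameter_list_string
FROM pg_catalog.pg_proc AS p
CROSS JOIN LATERAL unnest(p.proargtypes) WITH ORDINALITY AS a(argtype, i)
JOIN pg_catalog.pg_type AS t ON t.oid = a.argtype
GROUP BY p.oid, p.proname;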

David J.


Re: array UNNESTed to rows stable with respect to order?

2018-04-17 Thread Paul Jungwirth

On 04/17/2018 01:20 PM, Thiemo Kellner wrote:

 I use UNNEST to get rows from array. This works fine but I am
not sure whether the ordering remains in the later use.


I think you are looking for `WITH ORDINALITY` (in pg 9.4+). For instance 
you could rewrite your first CTE like so:


SELECT  oid as PROOID,
PRONAME,
t as PROARGTYPE,
i,
PRONAMESPACE,
PROOWNER
FROMPG_CATALOG.PG_PROC,
UNNEST(PROARGTYPES) WITH ORDINALITY AS proargtypes(t, i)
;

Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets

2018-04-17 Thread Adrian Klaver

On 04/17/2018 01:14 PM, Thiemo Kellner wrote:

Hi all

When running the following query in psql (server and client version 10 with
replication, on Debian 9), I get the message mentioned in the subject. I
have not found much searching the internet. There were suggestions about
bloat, so I ran "vacuum (verbose, full, analyze)", but the message remains.






I do not feel that my DB has a problem but I was taken aback a bit when 
I first saw the message in the subject.


I would be grateful if someone could shed a bit of light on this.


What do you have your log levels set to?

The only place I can find this message is in:

src/backend/utils/cache/catcache.c

elog(DEBUG1, "rehashing catalog cache id %d for %s; %d tups, %d buckets",
 cp->id, cp->cc_relname, cp->cc_ntup, cp->cc_nbuckets);

If I am following correctly, it is a fairly low-level message.
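
To check or change the client-side verbosity from psql (a quick sketch):

SHOW client_min_messages;
SET client_min_messages = notice;  -- back to the default; debug1 shows messages like this one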



Kind regards Thiemo




--
Adrian Klaver
adrian.kla...@aklaver.com



Old active connections?

2018-04-17 Thread Hans Sebastian
Hello group,

We run postgresql 10.3 for a python django app with gunicorn on nginx with
django version 1.9.5.

Recently, we started noticing there are many active connections from the
django app server that are more than 1 week old still showing in
pg_stat_activity.

Even though the django server has been stopped (all processes killed), the
active connections still persist. All of these connections are UPDATE
queries that look pretty normal.

Does anyone know the reasons they could be there? What could have caused
them to still be active?

This has become an issue as we started getting "FATAL:  remaining
connection slots are reserved for non-replication superuser connections"

Thanks,
-hans


Re: Old active connections?

2018-04-17 Thread Adrian Klaver

On 04/17/2018 05:02 PM, Hans Sebastian wrote:

Hello group,

We run postgresql 10.3 for a python django app with gunicorn on nginx 
with django version 1.9.5.


Recently, we started noticing there are many active connections from the 
django app server that are more than 1 week old still showing in 
pg_stat_activity.


Even though the django server has been stopped (all processes killed), 
the active connections still persist. All of these connections are 
UPDATE queries that look pretty normal.


Are sure they are coming from Django?



Does anyone know the reasons they could be there? What could have caused
them to still be active?


Can we see the data from pg_stat_activity for those queries?



This has become an issue as we started getting "FATAL:  remaining 
connection slots are reserved for non-replication superuser connections"


Thanks,
-hans




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Old active connections?

2018-04-17 Thread David G. Johnston
On Tue, Apr 17, 2018 at 5:02 PM, Hans Sebastian  wrote:

> Hello group,
>
> We run postgresql 10.3 for a python django app with gunicorn on nginx with
> django version 1.9.5.
>

> Recently, we started noticing there are many active connections from the
> django app server that are more than 1 week old still showing in
> pg_stat_activity.
>

There are quite a few timestamp columns, and a state field, in that view -
you should show some example records instead of leaving people to guess
whether you are presenting an accurate interpretation of the data.

Even though the django server has been stopped (all processes killed), the
> active connections still persist. All of these connections are UPDATE
> queries that look pretty normal.
>
> Does anyone know the reasons they could be there? What could have caused
> them to still be active?
>
> This has become an issue as we started getting "FATAL:  remaining
> connection slots are reserved for non-replication superuser connections"
>
>
Some live process somewhere seems to be keeping an open session with the
PostgreSQL service...
Long-lived non-idle statements would likely be waiting for a lock to be
released.
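
Something along these lines shows the relevant columns (a sketch, not tailored
to your setup):

SELECT pid, usename, state, wait_event_type, wait_event,
       now() - query_start  AS query_age,
       now() - state_change AS state_age,
       left(query, 60)      AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;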

David J.


Re: Old active connections?

2018-04-17 Thread Michael Paquier
On Tue, Apr 17, 2018 at 05:11:10PM -0700, David G. Johnston wrote:
> Long-lived non-idle statements would likely be waiting for a lock to be
> released.

Be very careful with transactions marked as "idle in transaction" for a
long time.  Long-running transactions prevent VACUUM from doing its work, as
the oldest XID in view does not advance, causing performance to go down
and bloat to go up.
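
A quick way to spot them (a sketch):

SELECT pid, usename, now() - xact_start AS xact_age, state
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;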
--
Michael




Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-04-17 Thread Michael Paquier
On Tue, Apr 17, 2018 at 02:57:03PM -0400, Tom Lane wrote:
> I think possibly the OP doesn't understand why it's designed that way.
> The point is not really to "recycle old WAL files", it's to avoid having
> disk space allocation occur during the critical section where we must
> PANIC on failure.  Now, of course, that doesn't really work if the
> filesystem is COW underneath, because it's allocating fresh disk space
> anyway even though semantically we're overwriting existing data.
> But what I'd like to see is a fix that deals with that somehow, rather
> than continue to accept the possibility of ENOSPC occurring inside WAL
> writes on these file systems.  I have no idea what such a fix would
> look like :-(

That looks like a rather difficult problem to solve in PostgreSQL
itself, as the operator running the cluster is in charge of setting up
the FS options which would control the COW behavior, so it seems to me
that there is room as well for an in-core option to tell the
checkpointer to enforce the removal of past files instead of simply
recycling them, because this actually breaks max_wal_size.

max_wal_size is of course a soft limit, and it has been discussed a
couple of times that it would be nice to get that to a hard limit, but
it is really a hard problem to keep the system from slowing down or even
stopping its I/O when the hard limit is close or has been reached.
--
Michael




Re: array UNNESTed to rows stable with respect to order?

2018-04-17 Thread Thiemo Kellner, NHC Barhufpflege

Zitat von "David G. Johnston" :


It does not.  If the array is not naturally ordered you will want to
attach a "with ordinality" clause to it for performing future ordering.


Thanks for the hints.

Kind regards

--
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



Re: array UNNESTed to rows stable with respect to order?

2018-04-17 Thread Thiemo Kellner

Zitat von Paul Jungwirth :

I think you are looking for `WITH ORDINALITY` (in pg 9.4+). For  
instance you could rewrite your first CTE like so:


Thanks for the hint.

Kind regards

--
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



Re: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets

2018-04-17 Thread Thiemo Kellner, NHC Barhufpflege

Zitat von Adrian Klaver :


What do you have your log levels set to?


Thanks for pointing this out. I set the client log level to debug1. So I am
just lucky not to have been flooded with messages?



--
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.



Re: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets

2018-04-17 Thread Laurenz Albe
Thiemo Kellner, NHC Barhufpflege wrote:
> > What do you have your log levels set to?
> 
> Thanks for pointing this out. I set the client log level to debug1. So I am
> just lucky not to have been flooded with messages?

Sort of. This is a normal operation and should not worry you.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com