VM Instance to Google Cloud SQL Migration

2018-11-14 Thread Sathish Kumar
Hi Team,

We would like to migrate our PostgreSQL VM instance on Google Cloud
Platform to Google Cloud SQL with minimal downtime. As far as I can tell, we
would have to export and import a SQL dump, but our database is large and we
cannot afford a long downtime.

Does anyone have a solution to achieve this?

Thanks & Regards,
Sathish Kumar.V
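For reference, the plain export/import path mentioned above might look like the following (a sketch only; the host names and database name are placeholders, and a dump of a large database keeps the source unavailable for writes for the whole duration, which is exactly the downtime problem described):

```
# Dump the database from the VM instance (plain SQL format shown).
pg_dump -h VM_HOST -U postgres -d mydb --no-owner --no-acl > mydb.sql

# Load the dump into the Cloud SQL instance.
psql -h CLOUDSQL_IP -U postgres -d mydb -f mydb.sql
```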


Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-14 Thread Sachin Kotwal
On Wed, Nov 14, 2018 at 7:08 PM Adrian Klaver wrote:

> On 11/13/18 11:04 PM, Sachin Kotwal wrote:
> > Hi PostgreSQL lovers,
> >
> > I heard news that Red Hat is going to be acquired by IBM. IBM has its own
> > database, and they have a history of selling applications with their own
> > hardware (software and hardware together).
> >
> > As far as I know, the PostgreSQL community has better support for the
> > Red Hat family than for any other platform.
>
> Not seeing it:
>
> https://www.postgresql.org/download/
>
> >
> > Is the community going to support/focus more on Debian platforms?
> > Does the community have any plan to switch its main supported platform?
>
> There is no main supported platform. There is the source, and there are
> packages built off the source. The packaging is done by a diverse group
> of people. So, for example, the RPMs:
>
> https://yum.postgresql.org/contact.php
>
> Devrim Gündüz   EDB
>
> Jeff Frost  PGX
>
> Craig Ringer   2ndQuadrant
>
>
> > Please share if any other plan.
> >
> >
> > --
> >
> > Thanks and Regards,
> > Sachin Kotwal
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


It looks like no one has a clear idea yet, and the deal is also not completely
done. Hopefully Red Hat community support will continue the same as before.

Let's wait for something to be announced by the community in the coming days.

Thanks all for your inputs.

Regards,
Sachin

-- 

Thanks and Regards,
Sachin Kotwal


Re: libpq to JDBC adapter

2018-11-14 Thread Dave Cramer
Looks very interesting,

Cheers,

Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Wed, 14 Nov 2018 at 14:57, Konstantin Knizhnik wrote:

> If somebody is interested in connecting to various JDBC-compatible
> databases through postgres_fdw,
> please look at my pq2jdbc project: https://github.com/postgrespro/pq2jdbc
> Details of the project are in the README file.
>
> If somebody can find some other use cases for libpq to JDBC adapter,
> please let me know!
>
> --
> Konstantin Knizhnik
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>


Re: Java UnsatisfiedLinkError exception when connecting to Postgresql database

2018-11-14 Thread Rob Sargent



On 11/14/18 5:03 PM, dcl...@cinci.rr.com wrote:

 Adrian Klaver  wrote:

On 11/14/18 10:24 AM, dcl...@cinci.rr.com wrote:
Please reply to list also.
Ccing list.

 Adrian Klaver  wrote:

On 11/14/18 9:25 AM, dcl...@cinci.rr.com wrote:

Hello;

I've written a Java program which uses PostgreSQL via JDBC.  The program works 
fine on all RedHat systems I've tested except one, where it yields an 
UnsatisfiedLinkError.  Here is the stack trace:

sun.misc.VM.latestUserDefinedLoader0(Native
Method)
sun.misc.VM.latestUserDefinedLoader(VM.java:411)
java.io.ObjectInputStream.latestUserDefinedLoader(ObjectInputStream.java:2351)
java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
org.postgresql.ds.common.BaseDataSource.readBaseObject(BaseDataSource.java:1210)
org.postgresql.ds.common.BaseDataSource.initializeFrom(BaseDataSource.java:1220)
org.postgresql.ds.PGPoolingDataSource.initialize(PGPoolingDataSource.java:267)
org.postgresql.ds.PGPoolingDataSource.getConnection(PGPoolingDataSource.java:324)

Any ideas?

What is different about the system that throws the error?

For example:

OS version
JDBC version
Postgres version
Java version

Thank you for your reply.

OS on working system: Linux 3.10.0-693.11.6.el7.x86_64 x86_64
OS on problem system: Linux 3.10.0-693.21.1.el7.x86_64 x86_64

JDBC version on both systems: 9.4.1209

Postgres version on both systems: 9.6.5 on x86_64-redhat-linux-gnu, compiled by 
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit

Java version on both systems:
   
openjdk version "1.8.0_171"

OpenJDK Runtime Environment (build 1.8.0_171-b10)
OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)


Hmm.
So what was the UnsatisfiedLinkError message, before the traceback above?

java.lang.UnsatisfiedLinkError: 
sun.misc.VM.latestUserDefinedLoader0()Ljava/lang/ClassLoader;

Thank you.



Should OpenJDK be looking for a sun class?





Re: Java UnsatisfiedLinkError exception when connecting to Postgresql database

2018-11-14 Thread dclark


 Adrian Klaver  wrote: 
> On 11/14/18 10:24 AM, dcl...@cinci.rr.com wrote:
> Please reply to list also.
> Ccing list.
> > 
> >  Adrian Klaver  wrote:
> >> On 11/14/18 9:25 AM, dcl...@cinci.rr.com wrote:
> >>> Hello;
> >>>
> >>> I've written a Java program which uses PostgreSQL via JDBC.  The program 
> >>> works fine on all RedHat systems I've tested except one, where it yields 
> >>> an UnsatisfiedLinkError.  Here is the stack trace:
> >>>
> >>> sun.misc.VM.latestUserDefinedLoader0(Native
> >>> Method)
> >>> sun.misc.VM.latestUserDefinedLoader(VM.java:411)
> >>> java.io.ObjectInputStream.latestUserDefinedLoader(ObjectInputStream.java:2351)
> >>> java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
> >>> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
> >>> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
> >>> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
> >>> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> >>> java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
> >>> org.postgresql.ds.common.BaseDataSource.readBaseObject(BaseDataSource.java:1210)
> >>> org.postgresql.ds.common.BaseDataSource.initializeFrom(BaseDataSource.java:1220)
> >>> org.postgresql.ds.PGPoolingDataSource.initialize(PGPoolingDataSource.java:267)
> >>> org.postgresql.ds.PGPoolingDataSource.getConnection(PGPoolingDataSource.java:324)
> >>>
> >>> Any ideas?
> >>
> >> What is different about the system that throws the error?
> >>
> >> For example:
> >>
> >> OS version
> >> JDBC version
> >> Postgres version
> >> Java version
> > 
> > Thank you for your reply.
> > 
> > OS on working system: Linux 3.10.0-693.11.6.el7.x86_64 x86_64
> > OS on problem system: Linux 3.10.0-693.21.1.el7.x86_64 x86_64
> > 
> > JDBC version on both systems: 9.4.1209
> > 
> > Postgres version on both systems: 9.6.5 on x86_64-redhat-linux-gnu, 
> > compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
> > 
> > Java version on both systems:
> >   
> > openjdk version "1.8.0_171"
> > OpenJDK Runtime Environment (build 1.8.0_171-b10)
> > OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)
> > 
> 
> Hmm.
> So what was the UnsatisfiedLinkError message, before the traceback above?

java.lang.UnsatisfiedLinkError: 
sun.misc.VM.latestUserDefinedLoader0()Ljava/lang/ClassLoader;

Thank you.




Re: Java UnsatisfiedLinkError exception when connecting to Postgresql database

2018-11-14 Thread Adrian Klaver

On 11/14/18 10:24 AM, dcl...@cinci.rr.com wrote:
Please reply to list also.
Ccing list.


 Adrian Klaver  wrote:

On 11/14/18 9:25 AM, dcl...@cinci.rr.com wrote:

Hello;

I've written a Java program which uses PostgreSQL via JDBC.  The program works 
fine on all RedHat systems I've tested except one, where it yields an 
UnsatisfiedLinkError.  Here is the stack trace:

sun.misc.VM.latestUserDefinedLoader0(Native
Method)
sun.misc.VM.latestUserDefinedLoader(VM.java:411)
java.io.ObjectInputStream.latestUserDefinedLoader(ObjectInputStream.java:2351)
java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
org.postgresql.ds.common.BaseDataSource.readBaseObject(BaseDataSource.java:1210)
org.postgresql.ds.common.BaseDataSource.initializeFrom(BaseDataSource.java:1220)
org.postgresql.ds.PGPoolingDataSource.initialize(PGPoolingDataSource.java:267)
org.postgresql.ds.PGPoolingDataSource.getConnection(PGPoolingDataSource.java:324)

Any ideas?


What is different about the system that throws the error?

For example:

OS version
JDBC version
Postgres version
Java version


Thank you for your reply.

OS on working system: Linux 3.10.0-693.11.6.el7.x86_64 x86_64
OS on problem system: Linux 3.10.0-693.21.1.el7.x86_64 x86_64

JDBC version on both systems: 9.4.1209

Postgres version on both systems: 9.6.5 on x86_64-redhat-linux-gnu, compiled by 
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit

Java version on both systems:
  
openjdk version "1.8.0_171"

OpenJDK Runtime Environment (build 1.8.0_171-b10)
OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)



Hmm.
So what was the UnsatisfiedLinkError message, before the traceback above?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 03:53:39PM -0500, Tom Lane wrote:
> Maybe I'm missing something, but doesn't this solve your problem
> as stated?
> 
> ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO public;


I'm not sure that's equivalent to what I am looking for below (which is not allowed):

> ALTER DEFAULT PRIVILEGES  FOR  ROLE  *.* IN SCHEMA "myschema" GRANT  select 
> ON TABLES TO "myuser"

-- 
nicolas



Re: pg_dump out of memory for large table with LOB

2018-11-14 Thread Ron

On 11/14/2018 11:14 AM, Jean-Marc Lessard wrote:

Adrien Nayrat wrote:
> With 17 million LOs, it could eat a lot of memory ;)
Yes it does.

I did several tests and here are my observations.

First, the memory settings are:
shared_buffers = 3GB
work_mem = 32Mb
maintenance_work_mem = 1GB
effective_cache_size = 9MB
bytea_output = 'escape'


Why escape instead of hex?


--
Angular momentum makes the world go 'round.


Re: pg_dump out of memory for large table with LOB

2018-11-14 Thread Tom Lane
Jean-Marc Lessard  writes:
> Would you recommend bytea over LOB considering that the max LOB size is well 
> below 1GB?

Yes, probably.  The reason that pg_dump has trouble with lots of small
BLOBs is the 9.0-era decision to treat BLOBs as independent objects
having their own owners, privilege attributes, and archive TOC entries
--- it's really the per-BLOB TOC entries that are causing the issue
for you here.  That model is fine as long as BLOBs are, uh, large.
If you're using them as replacements for bytea, the overhead is going
to be prohibitive.

regards, tom lane



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Tom Lane
Maybe I'm missing something, but doesn't this solve your problem
as stated?

ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO public;

regards, tom lane



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 09:04:44PM +0100, Laurenz Albe wrote:
> Nicolas Paris wrote:
> > I'd like my user to be able to select from any new table from other users.
> > 
> > > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" 
> > > GRANT  select ON TABLES TO "myuser"
> > > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" 
> > > GRANT  select ON TABLES TO "myuser"
> > > ...
> > 
> > 
> > Do I really have to repeat the command for all users ?
> > 
> > The problem is I have many users able to create tables, and all of them
> > have to read each other's tables.
> 
> Now whenever "alice" has to create a table, she runs
> SET ROLE tableowner;
> Then all these tables belong to "tableowner", and each user in group
> "tablereader" can SELECT from them:

Yes, this step is the overhead for me:
> SET ROLE tableowner;

In my mind, both bob and alice inherit from the same group, so they should
share the tables they create according to this:

> ALTER DEFAULT PRIVILEGES FOR ROLE tableowner IN SCHEMA myschema GRANT SELECT 
> ON TABLES TO tablereader;




-- 
nicolas



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Laurenz Albe
Nicolas Paris wrote:
> I'd like my user to be able to select from any new table from other users.
> 
> > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" GRANT 
> >  select ON TABLES TO "myuser"
> > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" GRANT 
> >  select ON TABLES TO "myuser"
> > ...
> 
> 
> Do I really have to repeat the command for all users ?
> 
> The problem is I have many users able to create tables, and all of them
> have to read each other's tables.

This is one setup that I can come up with:

CREATE ROLE tableowner NOINHERIT;
CREATE ROLE tablereader;
ALTER DEFAULT PRIVILEGES FOR ROLE tableowner IN SCHEMA myschema GRANT SELECT ON 
TABLES TO tablereader;

CREATE ROLE alice LOGIN IN ROLE tableowner, tablereader;
CREATE ROLE bob LOGIN IN ROLE tableowner, tablereader;

Now whenever "alice" has to create a table, she runs

SET ROLE tableowner;
CREATE TABLE myschema.newtable(x integer);
RESET ROLE;

Then all these tables belong to "tableowner", and each user in group
"tablereader" can SELECT from them:

\z myschema.newtable
                                       Access privileges
  Schema  |   Name   | Type  |       Access privileges       | Column privileges | Policies
----------+----------+-------+-------------------------------+-------------------+----------
 myschema | newtable | table | tableowner=arwdDxt/tableowner+|                   |
          |          |       | tablereader=r/tableowner      |                   |
(1 row)

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




libpq to JDBC adapter

2018-11-14 Thread Konstantin Knizhnik
If somebody is interested in connecting to various JDBC-compatible
databases through postgres_fdw,
please look at my pq2jdbc project: https://github.com/postgrespro/pq2jdbc
Details of the project are in the README file.

If somebody can find some other use cases for libpq to JDBC adapter,
please let me know!

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 10:05:51AM -0600, Ron wrote:
> On 11/14/2018 08:19 AM, Nicolas Paris wrote:
> > Hi
> > 
> > I'd like my user to be able to select from any new table from other users.
> > 
> Would ROLE Groups solve your problem?


Maybe yes; I'm not sure what that is. I tested this:

> create role myrolegroup;
> ALTER DEFAULT PRIVILEGES  FOR  ROLE "myrolegroup"  IN SCHEMA "myschema" GRANT 
>  select ON TABLES TO "myuser";
> create role myuser1 inherit in role myrolegroup;
> create role myuser2 inherit in role myrolegroup;

But if myuser2 creates a table, then myuser1 cannot select from it. I
guess that's because inheritance only applies to GRANTs (which are
distinct from DEFAULT PRIVILEGES).


Thanks,

-- 
nicolas



Re: TOAST : partial detoasting : only a small part of a toasted value (for pgpointcloud)

2018-11-14 Thread Rémi Cura
wow,
it was right under my nose.
Thank you very much !
Cheers,
Remi-C

On Tue, 13 Nov 2018 at 19:00, Tom Lane wrote:

> Rémi Cura writes:
> > So pgpointcloud sometimes stores very large groups of points in one
> > row (TOASTED), somewhere from a few kB to a few MB. TOAST would be in
> > EXTERNAL mode (no compression).
> > Sometimes we only want to access a part of this data (one or several
> > blocks within the full data).
> > From what I understand, for the moment it requires fetching all the data
> > from disk, then de-toasting it, then selecting only the part of the data
> > we are interested in.
>
> > Yet I think it is possible to detoast only a subset of the data (and thus
> > fetch only some part of the data), considering that the [doc on TOAST](
> > https://www.postgresql.org/docs/11/storage-toast.html) says:
> >> "Use of EXTERNAL will make substring operations on wide text and bytea
> >> columns faster (at the penalty of increased storage space) because these
> >> operations are optimized to fetch only the required parts of the
> >> out-of-line value when it is not compressed."
>
> > So my question is: how does it work, and how easy would it be to
> > implement for pgpointcloud?
>
> See PG_DETOAST_DATUM_SLICE and users of that macro.
>
> regards, tom lane
>
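For context, the slice API Tom points at is used roughly like this in a C extension function (a sketch only; the function name and arguments are hypothetical, and real pgpointcloud code would work with its own patch types rather than bytea):

```c
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(patch_slice);

/*
 * Return bytes [offset, offset+len) of a bytea argument without
 * detoasting the whole value. For uncompressed EXTERNAL storage,
 * only the TOAST chunks covering the slice are fetched from disk.
 */
Datum
patch_slice(PG_FUNCTION_ARGS)
{
    int32  offset = PG_GETARG_INT32(1);
    int32  len    = PG_GETARG_INT32(2);

    /* The _SLICE variant wraps PG_DETOAST_DATUM_SLICE under the hood. */
    bytea *part = PG_GETARG_BYTEA_P_SLICE(0, offset, len);

    PG_RETURN_BYTEA_P(part);
}
```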


RE: pg_dump out of memory for large table with LOB

2018-11-14 Thread Jean-Marc Lessard
Adrien Nayrat wrote:
> With 17 million LOs, it could eat a lot of memory ;)
Yes it does.

I did several tests and here are my observations.

First, the memory settings are:
shared_buffers = 3GB
work_mem = 32Mb
maintenance_work_mem = 1GB
effective_cache_size = 9MB
bytea_output = 'escape'

The largest LO is 160 KB for the dumped table.
The largest LO is 20 MB for the entire DB (far from the 1 GB limit).

I reduced shared_buffers to 512 MB and saw the same behavior.
I increased the pagefile from 8 GB to 16 GB and the dump proceeded. The total
memory of pg_dump reached nearly 20 GB.
I killed the dump after 200 GB (20 hrs). It pages aggressively and would have
taken 4 days to dump 800 GB.

Here is the memory pattern:
pg_dump: reading large objects  -> reads pg_largeobject_metadata; ramps up
gradually from 0 to 4 GB over 10 mins and jumps to nearly 10 GB at the end
pg_dump: reading dependency data  -> drops to 7 GB
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on  -> begins paging;
total memory reaches 20 GB
pg_dump: dumping contents of table "ibisl1.signatures"  -> begins writing to
the dump file
pg_dump: saving large objects  -> reads pg_largeobject file nodes and
writes to the dump file

I performed the same test on a smaller DB (60 GB).
I dumped a single table with LOBs and then the entire DB (3 tables have LOBs).
- Single 1 GB table: 1 GB including LOBs, 80,000 rows, max LOB size 100 KB
  pg_dump -h localhost -Fc -a -b -t signatures -v > d:\postgresql\sig.dmp
- Entire 60 GB DB: 3 tables have LOBs, 240,000 LOBs, max LOB size 20 MB
  pg_dump -h localhost -Fc -v > d:\postgresql\db.dmp
Both dumps used the same amount of memory (160 MB), and the dump file sizes are
more or less the same, 53 GB!
We can conclude that the single-table dump includes the entire pg_largeobject
table, not only the LOBs belonging to that table.
So why read the large objects to build a huge structure in memory if all
large objects are going to be dumped anyway?
There is certainly something to rework here.

I ran another test: I dumped only the application schema that contains the data.
  pg_dump -h localhost -Fc -n ibis* -v > d:\postgresql\ibislx.dmp
pg_dump used only a few MB, so the memory issue is clearly due to LOBs.

The relation between the number of large objects and dump memory looks linear,
~650 bytes/LOB:
Small DB:    240,000 LOBs = 0.160 GB
Large DB: 28,500,000 LOBs = 19.5 GB
That seems like a large memory allocation just for processing LOBs later on.

Another area where LOBs hurt is storage. LOBs are broken up and stored in 2 KB
pieces.
Due to the block header, only three 2 KB pieces fit in an 8 KB block, wasting
25% of the space (in fact pgstattuple reports ~20%).
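The two back-of-the-envelope figures in this message (≈650 bytes of pg_dump memory per LOB, and ~25% wasted space from 2 KB LO chunks in 8 KB blocks) can be sanity-checked with a little arithmetic. This only reproduces the numbers reported above; it is not a measurement:

```python
def dump_memory_gb(n_lobs, bytes_per_lob=650):
    """Estimated pg_dump memory if usage is linear in the LOB count."""
    return n_lobs * bytes_per_lob / 1e9

# The two databases described above.
small = dump_memory_gb(240_000)       # reported as ~0.160 GB
large = dump_memory_gb(28_500_000)    # reported as ~19.5 GB

# Space lost when only three 2 KB LO pieces fit in an 8 KB block.
block, piece = 8192, 2048
waste = (block - 3 * piece) / block   # fraction of the block left unused

print(round(small, 3), round(large, 1), waste)
```

The linear model lands close to both reported values, which supports the author's per-LOB overhead estimate.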

Would you recommend bytea over LOB considering that the max LOB size is well
below 1 GB?
Is bytea preferable in terms of community support, performance, features, etc.?

We chose to save binary data in LOBs rather than bytea because LOBs are stored
out of row; in particular, one table is frequently accessed, but not its LOBs.
However, the bytea would be toasted and stored out of row anyway, as the binary
data saved is bigger than 2 KB.
Currently the frequently accessed table contains 40 rows per 8 KB block (avg
row length 200 bytes).
Can we just set toast_tuple_target to 256 bytes for that table to get similar
results?

Will toasted bytea be stored in 2 KB pieces, with 25% of the space (or less if
toast_tuple_target is changed) wasted, as for LOBs?

I also presume that when a row (columns other than the bytea) is updated, a new
copy of the bytea will be made in the TOAST table?
If so, LOBs would be more suitable for frequently updated tables where the
binary data rarely changes.


Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com


Re: Java UnsatisfiedLinkError exception when connecting to Postgresql database

2018-11-14 Thread Adrian Klaver

On 11/14/18 9:25 AM, dcl...@cinci.rr.com wrote:

Hello;

I've written a Java program which uses PostgreSQL via JDBC.  The program works 
fine on all RedHat systems I've tested except one, where it yields an 
UnsatisfiedLinkError.  Here is the stack trace:

sun.misc.VM.latestUserDefinedLoader0(Native
Method)
sun.misc.VM.latestUserDefinedLoader(VM.java:411)
java.io.ObjectInputStream.latestUserDefinedLoader(ObjectInputStream.java:2351)
java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
org.postgresql.ds.common.BaseDataSource.readBaseObject(BaseDataSource.java:1210)
org.postgresql.ds.common.BaseDataSource.initializeFrom(BaseDataSource.java:1220)
org.postgresql.ds.PGPoolingDataSource.initialize(PGPoolingDataSource.java:267)
org.postgresql.ds.PGPoolingDataSource.getConnection(PGPoolingDataSource.java:324)

Any ideas?


What is different about the system that throws the error?

For example:

OS version
JDBC version
Postgres version
Java version



Thanks in advance.




--
Adrian Klaver
adrian.kla...@aklaver.com



Java UnsatisfiedLinkError exception when connecting to Postgresql database

2018-11-14 Thread dclark
Hello;

I've written a Java program which uses PostgreSQL via JDBC.  The program works 
fine on all RedHat systems I've tested except one, where it yields an 
UnsatisfiedLinkError.  Here is the stack trace:

sun.misc.VM.latestUserDefinedLoader0(Native
Method)
sun.misc.VM.latestUserDefinedLoader(VM.java:411)
java.io.ObjectInputStream.latestUserDefinedLoader(ObjectInputStream.java:2351)
java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
org.postgresql.ds.common.BaseDataSource.readBaseObject(BaseDataSource.java:1210)
org.postgresql.ds.common.BaseDataSource.initializeFrom(BaseDataSource.java:1220)
org.postgresql.ds.PGPoolingDataSource.initialize(PGPoolingDataSource.java:267)
org.postgresql.ds.PGPoolingDataSource.getConnection(PGPoolingDataSource.java:324)

Any ideas?

Thanks in advance.



<-> Operator on Trigram Index

2018-11-14 Thread Jeffrey Kamei
I'm trying to get the <-> operator to use a trigram index (GiST) I've
set on a table. Using `EXPLAIN VERBOSE` I can see the query engine ignoring
the trigram index when using the `<->` operator. However, if I use the `%`
operator, the index is found and used. Can you explain why this is
happening? As far as I can tell from the documentation, the `<->` operator
should be using the index as well.
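For anyone reproducing this, a minimal setup would look like the following (a sketch; the table and column names are hypothetical). One likely explanation, per the pg_trgm documentation, is that `<->` is a distance operator: a GiST trigram index supports it only through a nearest-neighbor `ORDER BY ... LIMIT`, not as a `WHERE` condition, whereas `%` is a boolean filter the index can satisfy directly:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE items (name text);
CREATE INDEX items_name_trgm ON items USING gist (name gist_trgm_ops);

-- % is a boolean similarity filter; the index can be used in WHERE:
SELECT * FROM items WHERE name % 'postgres';

-- <-> is a distance; the index is used only via KNN ORDER BY ... LIMIT:
SELECT * FROM items ORDER BY name <-> 'postgres' LIMIT 10;
```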


Re: Move cluster to new host, upgraded version [DONE]

2018-11-14 Thread Rich Shepard

On Wed, 14 Nov 2018, Rich Shepard wrote:


I think that I FUBARed the upgrade from 10.3 to 11.1. I've removed both
packages and am rebuilding 11.1. I'll install it, run initdb on it, then
re-run pg_dumpall on the 10.5 data directory on the old desktop. Starting
from scratch on this new desktop will save time for all of us.


  All fixed now. To complete the thread this is what I did:

  1) Removed Slackware packages for postgresql-10.3 and -11.1.
  2) Removed /var/lib/pgsql/10.3 and 11.1.
  3) Re-built postgresql-11.1 and re-installed it.
  4) As user 'postgres' ran initdb and pg_ctl start pointing to
/var/lib/pgsql/11/data.
  5) Edited postgresql.conf and pg_hba.conf to accept connections from all
hosts on the LAN.
  6) Re-started postgres.
  7) As 'postgres' ran
pg_dumpall -h salmo -c -f pg-all-2018-11-14.sql
  8) Then postgres ran
psql -f pg-all-2018-11-14.sql

  Now I, as a user, can access my databases without a password.

  Thank you, Adrian, for your patient help. And you, Tom, for your helpful
comment.

Best regards,

Rich



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Ron

On 11/14/2018 08:19 AM, Nicolas Paris wrote:

Hi

I'd like my user to be able to select from any new table from other users.


ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" GRANT  select ON 
TABLES TO "myuser"
ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" GRANT  select ON 
TABLES TO "myuser"
...


Do I really have to repeat the command for all users?

The problem is I have many users able to create tables, and all of them
have to read each other's tables.


Would ROLE Groups solve your problem?


--
Angular momentum makes the world go 'round.



Re: Move cluster to new host, upgraded version

2018-11-14 Thread Rich Shepard

On Wed, 14 Nov 2018, Tom Lane wrote:


Perhaps those are associated with some other data directory?


Tom/Adrian,

  I think that I FUBARed the upgrade from 10.3 to 11.1. I've removed both
packages and am rebuilding 11.1. I'll install it, run initdb on it, then
re-run pg_dumpall on the 10.5 data directory on the old desktop. Starting
from scratch on this new desktop will save time for all of us.

Thanks,

Rich



Re: Move cluster to new host, upgraded version

2018-11-14 Thread Tom Lane
Rich Shepard  writes:
>ps ax | grep postgres
> shows a number of processes, but psql tells me there's no server running,
> and there is no postmaster.opts or postmaster.pid in the data directory.

Perhaps those are associated with some other data directory?

You could try using lsof on one of them to see what its current working
directory is (or on Linux, examine /proc/NN/cwd).  lsof on whichever
one(s) is/are postmasters would also tell you what sockets they're
listening on.

regards, tom lane



Re: Move cluster to new host, upgraded version

2018-11-14 Thread Adrian Klaver

On 11/14/18 6:58 AM, Rich Shepard wrote:

On Tue, 13 Nov 2018, Adrian Klaver wrote:


Just realized the question I should have asked is:
How did you get the pg_dumpall file processed by Postgres?
In other words how did you do it without a password?


   As user postgres I entered the command
$ psql -f dump-all.sql

   In any case, I need to back up because I missed something when
initializing the cluster.

   ps ax | grep postgres

shows a number of processes, but psql tells me there's no server running,
and there is no postmaster.opts or postmaster.pid in the data directory.

   I can delete contents of the data directory and re-initdb, or re-install
the application and start from scratch.


It was running when you did this:

psql -f dump-all.sql

correct?

Seems to me it is a start up script issue.

Have you rebooted the computer since the last time Postgres ran?

Is there a startup script in init.d/ or where ever your scripts are?

Can you start the server manually using pg_ctl?



Advice appreciated,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Move cluster to new host, upgraded version

2018-11-14 Thread Rich Shepard

On Tue, 13 Nov 2018, Adrian Klaver wrote:


Just realized the question I should have asked is:
How did you get the pg_dumpall file processed by Postgres?
In other words how did you do it without a password?


  As user postgres I entered the command
$ psql -f dump-all.sql

  In any case, I need to back up because I missed something when
initializing the cluster.

  ps ax | grep postgres

shows a number of processes, but psql tells me there's no server running,
and there is no postmaster.opts or postmaster.pid in the data directory.

  I can delete contents of the data directory and re-initdb, or re-install
the application and start from scratch.

Advice appreciated,

Rich



Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
Hi

I'd like my user to be able to select from any new table from other users.

> ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" GRANT  
> select ON TABLES TO "myuser"
> ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" GRANT  
> select ON TABLES TO "myuser"
> ...


Do I really have to repeat the command for all users?

The problem is I have many users able to create tables, and all of them
have to read each other's tables.

Thanks



-- 
nicolas



Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-14 Thread Adrian Klaver

On 11/13/18 11:04 PM, Sachin Kotwal wrote:

Hi PostgreSQL lovers,

I heard news that Red Hat is going to be acquired by IBM. IBM has its own 
database, and they have a history of selling applications with their own 
hardware (software and hardware together).


As far as I know, the PostgreSQL community has better support for the 
Red Hat family than for any other platform.


Not seeing it:

https://www.postgresql.org/download/



Is the community going to support/focus more on Debian platforms?
Does the community have any plan to switch its main supported platform?


There is no main supported platform. There is the source, and there are 
packages built off the source. The packaging is done by a diverse group 
of people. So, for example, the RPMs:


https://yum.postgresql.org/contact.php

Devrim Gündüz   EDB

Jeff Frost  PGX

Craig Ringer   2ndQuadrant



Please share if any other plan.


--

Thanks and Regards,
Sachin Kotwal



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-14 Thread legrand legrand
Ravi Krishna-9 wrote
> Well your information needs some update.
> 
> - On AIX, IBM had no issues selling Oracle, a rival to DB2.
> - IBM Global Services, a consulting unit, was the single biggest sales
> force for Oracle installations outside Oracle. In other words, they ended
> up using Oracle for projects done by IBM-GS more than DB2.
>   This was true some 10 yrs ago. Oracle used to point it out, with glee.
> - Many years ago I was informed that EnterpriseDB was funded by IBM.
> I don't know how true that is.

There is (was) an IBM DB2 Oracle compatibility feature, and IBM was interested
in EDB's Oracle compatibility:
https://www.enterprisedb.com/fr/news/enterprisedb-and-ibm%C2%AE-collaborate-integrate-technology-new-version-db2

But IBM didn't buy EDB, and that will make things even more difficult for
Oracle ;o)

Regards
PAscal  



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-14 Thread Ravi Krishna
Well, your information needs some updating.

- On AIX, IBM had no issues selling Oracle, a rival to DB2.
- IBM Global Services, a consulting unit, was the single biggest sales force
for Oracle installations outside Oracle. In other words, they ended up using
Oracle for projects done by IBM-GS more than DB2.
  This was true some 10 yrs ago. Oracle used to point it out, with glee.
- Many years ago I was informed that EnterpriseDB was funded by IBM. I don't
know how true that is.


Re: GIN Index for low cardinality

2018-11-14 Thread Олег Самойлов
Eh, I checked, and you are right. Something is terribly wrong with the hash
index in PostgreSQL. But there is another hash-style index, gin(jsonb_path_ops),
or perhaps more correctly a gin+hash index. It looks like the best for this
purpose.

> On 26 Oct 2018, at 19:27, Jeff Janes wrote:
> 
> On Thu, Oct 25, 2018 at 9:36 AM Олег Самойлов wrote:
> 
>> On 17 Oct 2018, at 13:46, Ravi Krishna wrote:
>> 
>> In
>> https://www.cybertec-postgresql.com/en/ideas-for-scaling-postgresql-to-multi-terabyte-and-beyond/
>> it is mentioned:
>> 
>> "GIN, the most known non-default index type perhaps, has actually been around 
>> for ages (full-text search) and in short is perfect for indexing columns 
>> where there are lots of repeating values – think all kinds of statuses or 
>> good old Mr/Mrs/Miss. GIN stores every unique column value only once, whereas 
>> for the default B-tree you'll have e.g. 1 million leaf nodes with the integer 
>> "1" in it."
>> 
>> 
>> Does that mean GIN is a very good choice for low-cardinality columns?
> 
> Not necessarily. There are other indexes which also don't keep the column
> value in every leaf. Hash, for instance.
> 
> For smallish values (which low cardinality columns tend to be) the per-tuple 
> overhead and the pointer itself is probably much larger than the value, so 
> hash won't save you much if any space.  The GIN index removes not just the 
> value, but the per-tuple overhead.  And it also compresses the posting list 
> to further save space.
> 
> Here is a real-world example from one of my databases where each value is 
> about 17 characters long, and is present about 20 times:
> 
> gin: 411 MB
> btree: 2167 MB 
> hash: 2159 MB
> 
> Cheers,
> 
> Jeff
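A comparison like Jeff's can be reproduced with `pg_relation_size` (a sketch; the table `t` and column `status` are placeholders, and the GIN index on a plain scalar column assumes the btree_gin extension, which supplies GIN operator classes for scalar types):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gin;  -- GIN opclasses for scalar columns

CREATE INDEX t_status_btree ON t USING btree (status);
CREATE INDEX t_status_hash  ON t USING hash  (status);
CREATE INDEX t_status_gin   ON t USING gin   (status);

-- Compare on-disk sizes of the three indexes.
SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relname IN ('t_status_btree', 't_status_hash', 't_status_gin');
```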



Re: WTF with hash index?

2018-11-14 Thread Олег Самойлов
Ah, thanks. I am not a developer of PostgreSQL; I am a developer in PostgreSQL.
:) And I see two hash indexes on the same data, one of them 43 times bigger
than the other; this looks like something is terribly wrong. Just a free idea
for how to considerably improve your product.

> On 13 Nov 2018, at 22:37, Alvaro Herrera wrote:
> 
> On 2018-Nov-13, Олег Самойлов wrote:
> 
>> Very much better. What about to copy paste algorithm from
>> gin(jsonb_path_ops) to the hash index?
> 
> You're welcome to submit patches.
> 
> -- 
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services