Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Filip Sedlák
This would be a very special case for COPY. It applies only to a single 
column of JSON values. The original problem can be solved with psql 
--tuples-only as David wrote earlier.



$ psql -tc 'select json_agg(row_to_json(t))
 from (select * from public.tbl_json_test) t;'

 [{"id":1,"t_test":"here's a \"string\""}]


Special-casing any encoding/escaping scheme leads to bugs and harder 
parsing.


Just my 2c.

--
Filip Sedlák




Re: Installing extension temporal_tables for PG13

2023-11-27 Thread Adrian Klaver

On 11/27/23 18:18, H wrote:

On November 27, 2023 8:48:35 PM GMT-05:00, Adrian Klaver 
 wrote:

On 11/27/23 17:41, H wrote:

On 11/27/2023 08:38 PM, Adrian Klaver wrote:

On 11/27/23 17:17, H wrote:



I don't use PGXN so I am flying blind here. You may need to do:

sudo pgxn install temporal_tables


I ran pgxn install temporal_tables as root so that should not be the problem.



On an old machine running openSUSE  I did:

sudo pgxn install --pg_config=/usr/local/pgsql15/bin/pg_config 
temporal_tables



Where this

 l /usr/local/pgsql15/lib64/pgxs/src/makefiles/pgxs.mk
-rw-r--r-- 1 root root 14768 Sep 25 09:23 
/usr/local/pgsql15/lib64/pgxs/src/makefiles/pgxs.mk


was already installed. I presume as part of the source build I did for 
Postgres 15.


Maybe the Centos packaging has a separate package you need to install to 
get the PGXS infrastructure?
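
For the PGDG RPMs that is typically the -devel package, so perhaps (a guess 
on my part, based on the /usr/pgsql-13 paths in your pg_config output):

sudo yum install postgresql13-devel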





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Installing extension temporal_tables for PG13

2023-11-27 Thread H
On November 27, 2023 8:48:35 PM GMT-05:00, Adrian Klaver 
 wrote:
>On 11/27/23 17:41, H wrote:
>> On 11/27/2023 08:38 PM, Adrian Klaver wrote:
>>> On 11/27/23 17:17, H wrote:
>
>>>> I have both make and gmake installed on the system but the issue
>>>> seems to be that the file pgxs.mk does not exist? In fact, I can see
>>>> that the /usr/pgsql-13/lib/pgxs directory does not exist.
>>>>
>>>> Suggestions?
>>>
>>> What does running
>>>
>>>     pg_config
>>>
>>> at the command line return?
>>>



>>>
>> BINDIR = /usr/pgsql-13/bin
>
>
>I don't use PGXN so I am flying blind here. You may need to do:
>
>sudo pgxn install temporal_tables

I ran pgxn install temporal_tables as root so that should not be the problem.




Re: Installing extension temporal_tables for PG13

2023-11-27 Thread Adrian Klaver

On 11/27/23 17:41, H wrote:

On 11/27/2023 08:38 PM, Adrian Klaver wrote:

On 11/27/23 17:17, H wrote:



I have both make and gmake installed on the system but the issue seems to be 
that the file pgxs.mk does not exist? In fact, I can see that 
/usr/pgsql-13/lib/pgxs directory does not exist.

Suggestions?


What does running

    pg_config

at the command line return?








BINDIR = /usr/pgsql-13/bin



I don't use PGXN so I am flying blind here. You may need to do:

sudo pgxn install temporal_tables

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Installing extension temporal_tables for PG13

2023-11-27 Thread H
On 11/27/2023 08:38 PM, Adrian Klaver wrote:
> On 11/27/23 17:17, H wrote:
>> On 11/27/2023 07:26 PM, Adrian Klaver wrote:
>>> On 11/27/23 16:18, H wrote:
>>>
 The link to the pgxn client on arkhipov's page leads to a 
 non-functioning website: pgxnclient.projects.pgfoundry.org.

 Does anyone know if the pgxn client can be found somewhere else?

>>>
>>> Or:
>>>
>>> https://pgxn.github.io/pgxnclient/
>>>
>> Downloaded pgxn and first running:
>>
>> pgxn install temporal_tables
>>
>> but then running into another problem:
>>
>> GNUmakefile:25: /usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk: No such file 
>> or directory
>> gmake: *** No rule to make target 
>> `/usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk'.  Stop.
>> ERROR: command returned 2: gmake PG_CONFIG=/usr/pgsql-13/bin/pg_config all
>>
>> I have both make and gmake installed on the system but the issue seems to be 
>> that the file pgxs.mk does not exist? In fact, I can see that 
>> /usr/pgsql-13/lib/pgxs directory does not exist.
>>
>> Suggestions?
>
> What does running
>
>    pg_config
>
> at the command line return?
>
>>
>>
>>
>
BINDIR = /usr/pgsql-13/bin
DOCDIR = /usr/pgsql-13/doc
HTMLDIR = /usr/pgsql-13/doc/html
INCLUDEDIR = /usr/pgsql-13/include
PKGINCLUDEDIR = /usr/pgsql-13/include
INCLUDEDIR-SERVER = /usr/pgsql-13/include/server
LIBDIR = /usr/pgsql-13/lib
PKGLIBDIR = /usr/pgsql-13/lib
LOCALEDIR = /usr/pgsql-13/share/locale
MANDIR = /usr/pgsql-13/share/man
SHAREDIR = /usr/pgsql-13/share
SYSCONFDIR = /etc/sysconfig/pgsql
PGXS = /usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE =  '--enable-rpath' '--prefix=/usr/pgsql-13' 
'--includedir=/usr/pgsql-13/include' '--mandir=/usr/pgsql-13/share/man' 
'--datadir=/usr/pgsql-13/share' '--libdir=/usr/pgsql-13/lib' 
'--enable-tap-tests' '--with-icu' '--with-llvm' '--with-perl' '--with-python' 
'--with-tcl' '--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' 
'--with-gssapi' '--with-includes=/usr/include' '--with-libraries=/usr/lib64' 
'--enable-nls' '--enable-dtrace' '--with-uuid=e2fs' '--with-libxml' 
'--with-libxslt' '--with-ldap' '--with-selinux' '--with-systemd' 
'--with-system-tzdata=/usr/share/zoneinfo' '--sysconfdir=/etc/sysconfig/pgsql' 
'--docdir=/usr/pgsql-13/doc' '--htmldir=/usr/pgsql-13/doc/html' 'CFLAGS=-O2 -g 
-pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 
'LDFLAGS=-Wl,--as-needed' 'LLVM_CONFIG=/usr/lib64/llvm5.0/bin/llvm-config' 
'CLANG=/opt/rh/llvm-toolset-7/root/usr/bin/clang'
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig' 
'PYTHON=/usr/bin/python3'
CC = gcc -std=gnu99
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include
CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Werror=vla -Wendif-labels 
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 
-grecord-gcc-switches -m64 -mtune=generic
CFLAGS_SL = -fPIC
LDFLAGS = -Wl,--as-needed -L/usr/lib64/llvm5.0/lib -L/usr/lib64 -Wl,--as-needed 
-Wl,-rpath,'/usr/pgsql-13/lib',--enable-new-dtags
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgcommon -lpgport -lpthread -lselinux -lxslt -lxml2 -lpam -lssl 
-lcrypto -lgssapi_krb5 -lz -lreadline -lrt -ldl -lm
VERSION = PostgreSQL 13.13





Re: Installing extension temporal_tables for PG13

2023-11-27 Thread Adrian Klaver

On 11/27/23 17:17, H wrote:

On 11/27/2023 07:26 PM, Adrian Klaver wrote:

On 11/27/23 16:18, H wrote:


The link to the pgxn client on arkhipov's page leads to a non-functioning 
website: pgxnclient.projects.pgfoundry.org.

Does anyone know if the pgxn client can be found somewhere else?



Or:

https://pgxn.github.io/pgxnclient/


Downloaded pgxn and first running:

pgxn install temporal_tables

but then running into another problem:

GNUmakefile:25: /usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk: No such file or 
directory
gmake: *** No rule to make target 
`/usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk'.  Stop.
ERROR: command returned 2: gmake PG_CONFIG=/usr/pgsql-13/bin/pg_config all

I have both make and gmake installed on the system but the issue seems to be 
that the file pgxs.mk does not exist? In fact, I can see that 
/usr/pgsql-13/lib/pgxs directory does not exist.

Suggestions?


What does running

   pg_config

at the command line return?







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Installing extension temporal_tables for PG13

2023-11-27 Thread H
On 11/27/2023 07:26 PM, Adrian Klaver wrote:
> On 11/27/23 16:18, H wrote:
>
>> The link to the pgxn client on arkhipov's page leads to a 
>> non-functioning website: pgxnclient.projects.pgfoundry.org.
>>
>> Does anyone know if the pgxn client can be found somewhere else?
>>
>
> Or:
>
> https://pgxn.github.io/pgxnclient/
>
Downloaded pgxn and first running:

pgxn install temporal_tables

but then running into another problem:

GNUmakefile:25: /usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk: No such file or 
directory
gmake: *** No rule to make target 
`/usr/pgsql-13/lib/pgxs/src/makefiles/pgxs.mk'.  Stop.
ERROR: command returned 2: gmake PG_CONFIG=/usr/pgsql-13/bin/pg_config all

I have both make and gmake installed on the system but the issue seems to be 
that the file pgxs.mk does not exist? In fact, I can see that 
/usr/pgsql-13/lib/pgxs directory does not exist.

Suggestions?





Re: IPV6 issue

2023-11-27 Thread Adrian Klaver

On 11/27/23 16:58, Atul Kumar wrote:

I don't know how postgres was installed,


1) Someone installed it. Ask around on where it came from.

2) Query the package manager to see if it was installed that way?




How do I check if I have more than one version of psql installed ?


A quick and dirty way to see what you are using:

   whereis psql

To find all the versions:

   sudo find / -name  psql




Regards.

On Tue, Nov 28, 2023 at 6:26 AM Adrian Klaver wrote:


On 11/27/23 16:42, Atul Kumar wrote:
 > Hi,
 >
 > unix_socket_directories is set to default i.e. /tmp and I could
see the
 > socket in /tmp directory.

You have not answered:

     How did you install Postgres?

     Do you have more than one version of psql installed?


Though I am pretty sure I know the answer to the second question.


 >
 >
 > Regards.
 >
 >
 >
 >
 >      > On Tue, Nov 28, 2023 at 2:11 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
 >
 >     Adrian Klaver <adrian.kla...@aklaver.com> writes:
 >      > On 11/27/23 12:11, Atul Kumar wrote:
 >      >> I found that localhost was set to .bash_profile and when I
 >     removed it
 >      >> and then re-attempted to connected the database using "psql
 >     postgres", I
 >      >> got this new error:
 >      >>
 >      >> psql postgres -p 5432
 >      >> psql: error: could not connect to server: No such file or
directory
 >      >>         Is the server running locally and accepting
 >      >>         connections on Unix domain socket
 >      >> "/var/run/postgresql/.s.PGSQL.5432"?
 >
 >      > Do you have more than one version of psql installed?
 >
 >     Yeah, that.  You're apparently using a version of psql/libpq that
 >     thinks the default Unix socket location is /var/run/postgresql;
 >     but the postmaster you are using did not create a socket there.
 >     (Probably it put one in /tmp instead, which is the out-of-the-box
 >     default location.  But some distros consider that insecure so
they
 >     override it, typically to /var/run/postgresql/.)
 >
 >     The easiest workaround if you have a mishmash of Postgres
libraries
 >     is to tell the postmaster to create sockets in both places.
 >     See "unix_socket_directories" parameter.
 >
 >                              regards, tom lane
 >

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: IPV6 issue

2023-11-27 Thread Atul Kumar
I don't know how postgres was installed,

How do I check if I have more than one version of psql installed ?


Regards.

On Tue, Nov 28, 2023 at 6:26 AM Adrian Klaver 
wrote:

> On 11/27/23 16:42, Atul Kumar wrote:
> > Hi,
> >
> > unix_socket_directories is set to default i.e. /tmp and I could see the
> > socket in /tmp directory.
>
> You have not answered:
>
> How did you install Postgres?
>
> Do you have more than one version of psql installed?
>
>
> Though I am pretty sure I know the answer to the second question.
>
>
> >
> >
> > Regards.
> >
> >
> >
> >
> > On Tue, Nov 28, 2023 at 2:11 AM Tom Lane  > > wrote:
> >
> > Adrian Klaver  > > writes:
> >  > On 11/27/23 12:11, Atul Kumar wrote:
> >  >> I found that localhost was set to .bash_profile and when I
> > removed it
> >  >> and then re-attempted to connected the database using "psql
> > postgres", I
> >  >> got this new error:
> >  >>
> >  >> psql postgres -p 5432
> >  >> psql: error: could not connect to server: No such file or
> directory
> >  >> Is the server running locally and accepting
> >  >> connections on Unix domain socket
> >  >> "/var/run/postgresql/.s.PGSQL.5432"?
> >
> >  > Do you have more than one version of psql installed?
> >
> > Yeah, that.  You're apparently using a version of psql/libpq that
> > thinks the default Unix socket location is /var/run/postgresql;
> > but the postmaster you are using did not create a socket there.
> > (Probably it put one in /tmp instead, which is the out-of-the-box
> > default location.  But some distros consider that insecure so they
> > override it, typically to /var/run/postgresql/.)
> >
> > The easiest workaround if you have a mishmash of Postgres libraries
> > is to tell the postmaster to create sockets in both places.
> > See "unix_socket_directories" parameter.
> >
> >  regards, tom lane
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: IPV6 issue

2023-11-27 Thread Adrian Klaver

On 11/27/23 16:42, Atul Kumar wrote:

Hi,

unix_socket_directories is set to default i.e. /tmp and I could see the 
socket in /tmp directory.


You have not answered:

   How did you install Postgres?

   Do you have more than one version of psql installed?


Though I am pretty sure I know the answer to the second question.





Regards.




On Tue, Nov 28, 2023 at 2:11 AM Tom Lane wrote:


Adrian Klaver <adrian.kla...@aklaver.com> writes:
 > On 11/27/23 12:11, Atul Kumar wrote:
 >> I found that localhost was set to .bash_profile and when I
removed it
 >> and then re-attempted to connected the database using "psql
postgres", I
 >> got this new error:
 >>
 >> psql postgres -p 5432
 >> psql: error: could not connect to server: No such file or directory
 >>         Is the server running locally and accepting
 >>         connections on Unix domain socket
 >> "/var/run/postgresql/.s.PGSQL.5432"?

 > Do you have more than one version of psql installed?

Yeah, that.  You're apparently using a version of psql/libpq that
thinks the default Unix socket location is /var/run/postgresql;
but the postmaster you are using did not create a socket there.
(Probably it put one in /tmp instead, which is the out-of-the-box
default location.  But some distros consider that insecure so they
override it, typically to /var/run/postgresql/.)

The easiest workaround if you have a mishmash of Postgres libraries
is to tell the postmaster to create sockets in both places.
See "unix_socket_directories" parameter.

                         regards, tom lane



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: IPV6 issue

2023-11-27 Thread Atul Kumar
Hi,

unix_socket_directories is set to default i.e. /tmp and I could see the
socket in /tmp directory.


Regards.




On Tue, Nov 28, 2023 at 2:11 AM Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 11/27/23 12:11, Atul Kumar wrote:
> >> I found that localhost was set to .bash_profile and when I removed it
> >> and then re-attempted to connected the database using "psql postgres",
> I
> >> got this new error:
> >>
> >> psql postgres -p 5432
> >> psql: error: could not connect to server: No such file or directory
> >> Is the server running locally and accepting
> >> connections on Unix domain socket
> >> "/var/run/postgresql/.s.PGSQL.5432"?
>
> > Do you have more than one version of psql installed?
>
> Yeah, that.  You're apparently using a version of psql/libpq that
> thinks the default Unix socket location is /var/run/postgresql;
> but the postmaster you are using did not create a socket there.
> (Probably it put one in /tmp instead, which is the out-of-the-box
> default location.  But some distros consider that insecure so they
> override it, typically to /var/run/postgresql/.)
>
> The easiest workaround if you have a mishmash of Postgres libraries
> is to tell the postmaster to create sockets in both places.
> See "unix_socket_directories" parameter.
>
> regards, tom lane
>


Re: IPV6 issue

2023-11-27 Thread Atul Kumar
listen_address is set to '*',
version=  psql (12.15, server 12.15.19)


Regards.

On Tue, Nov 28, 2023 at 2:01 AM Adrian Klaver 
wrote:

> On 11/27/23 12:11, Atul Kumar wrote:
> > Hi,
> >
> > I found that localhost was set to .bash_profile and when I removed it
> > and then re-attempted to connected the database using "psql postgres", I
> > got this new error:
> >
> > psql postgres -p 5432
> > psql: error: could not connect to server: No such file or directory
> >  Is the server running locally and accepting
> >  connections on Unix domain socket
> > "/var/run/postgresql/.s.PGSQL.5432"?
> >
> > So DO I need to restart the postgres service or is there any other
> > workaround?
> >
> >
>
> In postgresql.conf or any include *.conf it points to does
>
> port = 5432
>
> and what is
>
> listen_addresses
>
> set to?
>
> How did you install Postgres?
>
> Do you have more than one version of psql installed?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: malloc errors in psql with libpq 16.1 on Mac

2023-11-27 Thread Tom Lane
Don Seiler  writes:
> On Mon, Nov 27, 2023 at 6:02 PM Tom Lane  wrote:
>> Is OpenSSL in use?  If so, what version?  I'm wondering if this is a
>> variant of the OpenSSL 3.2.0 compatibility problem we just heard of:

> My colleague does confirm it is OpenSSL 3.2.0:
> ==> openssl@3: stable 3.2.0 (bottled)

Ugh.

> Does this mean that downgrading libpq wouldn't help? He'd have to downgrade
> openssl instead (or wait for a fix from somewhere upstream)?

Right.

regards, tom lane




Re: malloc errors in psql with libpq 16.1 on Mac

2023-11-27 Thread Don Seiler
On Mon, Nov 27, 2023 at 6:02 PM Tom Lane  wrote:

>
> Is OpenSSL in use?  If so, what version?  I'm wondering if this is a
> variant of the OpenSSL 3.2.0 compatibility problem we just heard of:
>

My colleague does confirm it is OpenSSL 3.2.0:

==> openssl@3: stable 3.2.0 (bottled)

Does this mean that downgrading libpq wouldn't help? He'd have to downgrade
openssl instead (or wait for a fix from somewhere upstream)?

Don.
-- 
Don Seiler
www.seiler.us


Re: Installing extension temporal_tables for PG13

2023-11-27 Thread Adrian Klaver

On 11/27/23 16:18, H wrote:


The link to the pgxn client on arkhipov's page leads to a non-functioning 
website: pgxnclient.projects.pgfoundry.org.

Does anyone know if the pgxn client can be found somewhere else?



Or:

https://pgxn.github.io/pgxnclient/

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Installing extension temporal_tables for PG13

2023-11-27 Thread Adrian Klaver

On 11/27/23 16:18, H wrote:

On November 27, 2023 6:58:40 PM GMT-05:00, Adrian Klaver 
 wrote:








The link to the pgxn client on arkhipov's page leads to a non-functioning 
website: pgxnclient.projects.pgfoundry.org.

Does anyone know if the pgxn client can be found somewhere else?



https://pgxn.org/faq/

PGXN Client

Is there a command-line client for installing extensions from PGXN?
There is! Install it with this command:

sudo easy_install pgxnclient

Then you can install PGXN extensions with a simple command:

pgxn install pair

Run pgxn help to get a list of supported commands; or check out the 
introductory blog, the complete documentation or the source code.
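
Note that easy_install has been deprecated for years; since pgxnclient is 
also published on PyPI, pip should work as well (untested here):

sudo pip install pgxnclient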


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Installing extension temporal_tables for PG13

2023-11-27 Thread H
On November 27, 2023 6:58:40 PM GMT-05:00, Adrian Klaver 
 wrote:
>On 11/27/23 15:37, H wrote:
>> Running Centos7 and PG13. I'd like to install the temporal_tables
>extension but it does not seem to exist in postgresql13-contrib.
>
>If it is not here:
>
>https://www.postgresql.org/docs/13/contrib.html
>
>then it won't be in postgresql13-contrib.
>
>> 
>> I did find the source code at
>> https://github.com/arkhipov3/temporal_tables which suggests to use the
>> pgxn client to install it. The link to this client found on this page -
>> pgxnclient.projects.pgfoundry.org - however, is non-functional.
>
>The link above did not work for me, the below did:
>
>https://github.com/arkhipov/temporal_tables
>
>> 
>> Am I correct that I need to either find the pgxn client or,
>> alternatively, install the development environment and compile the
>> source code for this extension?
>
>Yes.
>
>> 
>> Thank you.
>> 
>> 
>> 

The link to the pgxn client on arkhipov's page leads to a non-functioning 
website: pgxnclient.projects.pgfoundry.org.

Does anyone know if the pgxn client can be found somewhere else?




Re: malloc errors in psql with libpq 16.1 on Mac

2023-11-27 Thread Tom Lane
Don Seiler  writes:
> I have a colleague trying to use psql to connect to a remote DB, but he's
> getting this error:
> FATAL: no PostgreSQL user name specified in startup packet
> psql(42705,0x7ff84a07b700) malloc: *** error for object 0xa6: pointer being
> freed was not allocated
> psql(42705,0x7ff84a07b700) malloc: *** set a breakpoint in
> malloc_error_break to debug

Is OpenSSL in use?  If so, what version?  I'm wondering if this is a
variant of the OpenSSL 3.2.0 compatibility problem we just heard of:

https://www.postgresql.org/message-id/flat/CAN55FZ1eDDYsYaL7mv%2BoSLUij2h_u6hvD4Qmv-7PK7jkji0uyQ%40mail.gmail.com

regards, tom lane




Re: Installing extension temporal_tables for PG13

2023-11-27 Thread Adrian Klaver

On 11/27/23 15:37, H wrote:

Running Centos7 and PG13. I'd like to install the temporal_tables extension but 
it does not seem to exist in postgresql13-contrib.


If it is not here:

https://www.postgresql.org/docs/13/contrib.html

then it won't be in postgresql13-contrib.



I did find the source code at https://github.com/arkhipov3/temporal_tables 
which suggests to use the pgxn client to install it. The link to this client 
found on this page - pgxnclient.projects.pgfoundry.org - however, is 
non-functional.


The link above did not work for me, the below did:

https://github.com/arkhipov/temporal_tables



Am I correct that I need to either find the pgxn client or, alternatively, 
install the development environment and compile the source code for this 
extension?


Yes.



Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com





malloc errors in psql with libpq 16.1 on Mac

2023-11-27 Thread Don Seiler
I have a colleague trying to use psql to connect to a remote DB, but he's
getting this error:

FATAL: no PostgreSQL user name specified in startup packet

psql(42705,0x7ff84a07b700) malloc: *** error for object 0xa6: pointer being
freed was not allocated
psql(42705,0x7ff84a07b700) malloc: *** set a breakpoint in
malloc_error_break to debug

He was using psql 14 and we upgraded him to psql 15 but no change. I
noticed he was on libpq 16.1 (installed via homebrew). Another colleague
who was able to connect fine was on libpq 15.3. He upgraded to libpq 16.1
and boom he now gets the same error.

One colleague is on an older amd64 Mac, the other is on a newer arm64 Mac.
They are working to try to pin the older version of libpq but this is
probably going to bust out to a larger group as other homebrew users run
their routine updates.
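
A possible stopgap with Homebrew, assuming the working libpq is still
installed on a given machine, might be to hold it in place (untested here):

brew pin libpq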

Just wondering if anyone else has had similar experiences or if this is a
known issue?

-- 
Don Seiler
www.seiler.us


Installing extension temporal_tables for PG13

2023-11-27 Thread H
Running Centos7 and PG13. I'd like to install the temporal_tables extension but 
it does not seem to exist in postgresql13-contrib.

I did find the source code at https://github.com/arkhipov3/temporal_tables 
which suggests to use the pgxn client to install it. The link to this client 
found on this page - pgxnclient.projects.pgfoundry.org - however, is 
non-functional.

Am I correct that I need to either find the pgxn client or, alternatively, 
install the development environment and compile the source code for this 
extension?

Thank you.





Dynamically generate a nested json

2023-11-27 Thread Rushabh Shah
Hi,

I want to dynamically generate a nested json file. I have written a
function for it in PL/PGSQL that accepts 3 arrays. First one is an array of
all json fields, second one is an array of all json fields with columns
from tables present in db, third one mentions the type for all the fields
inside the json file.

This is what I have so far that is working:

declare
  outputs text;
begin
  outputs := '';
  -- walk the three parallel arrays; each element describes one JSON field
  for i in 1 .. array_upper(fieldtype, 1) loop
    select case
             when lower(fieldtype[i]) = 'field' then
               (outputs || jsonb_build_object(fname[i], tcolumn[i])::text)::text
             when lower(fieldtype[i]) = 'json object' then
               (outputs || jsonb_build_object(fname[i], jsonb_build_object())::text)::text
             when lower(fieldtype[i]) = 'json array' then
               (outputs || json_build_array(fname[i], json_build_array())::text)::text
             else 'It is not field, object or an array'::text
           end
      into outputs
      from tblname;
  end loop;
  return outputs;
end;

So now, for example, the input for my function is:
fname: ['passenger', 'firstname', 'lastname', 'address', 'city', 'state',
'country']
tcolumn: [, 'pass.fname', 'pass.lname', , 'address.city', 'address.state',
'address.country']
ftype: ['json object', 'field', 'field', 'json array', 'field', 'field',
'field']

This is what I want my output to look like:
{
  "passenger": {
    "firstname": "john",
    "lastname": "smith",
    "address": [
      {
        "city": "Houston",
        "state": "Texas",
        "country": "USA"
      }
    ]
  }
}

But currently I am having difficulty adding firstname inside passenger json
object.

I know that I need to loop through the json field names array again to go
one level deeper inside the jsonb_build_object() function to get the nested
fields and arrays, but that would make my function very big. This is what I
need some assistance with.
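
For reference, this is a hand-written static query that produces the target
shape (the table aliases pass and address, and the join, are placeholders
for the real sources); the dynamic version would have to assemble something
equivalent:

select jsonb_build_object(
         'passenger', jsonb_build_object(
           'firstname', pass.fname,
           'lastname',  pass.lname,
           'address',   jsonb_build_array(
             jsonb_build_object(
               'city',    address.city,
               'state',   address.state,
               'country', address.country))))
from pass
join address on address.pass_id = pass.id;  -- placeholder join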

Thanks for all the help.

Regards,
Rushabh


Re: suppress notices from inside a stored a plpgqsl function

2023-11-27 Thread Adrian Klaver

On 11/27/23 12:51, David Gauthier wrote:

Hi:

I have a plpgsql function that has this...

     drop table if exists tmp_diff_blkviews;

Even with the "if exists", I still get...

NOTICE:  table "tmp_diff_blkviews" does not exist, skipping
CONTEXT:  SQL statement "drop table if exists tmp_diff_blkviews"
PL/pgSQL function dvm.blkview_diffs(character varying,character varying) 
line 6 at SQL statement


I want to suppress that.   Even if the temp table exists, I don't want 
to hear about how it had to delete the table.  Just delete it if it 
exists and be quiet about it.


This function is being called through perl/dbi. So client side command 
line set options, or anything like that, is no good.  Is there  way to 
control messaging from inside the function ?




Per here:

https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-RAISE

"Whether messages of a particular priority are reported to the client, 
written to the server log, or both is controlled by the log_min_messages 
and client_min_messages configuration variables. See Chapter 20 for more 
information."


I have not tried it but maybe SET client_min_messages to something above 
NOTICE in the function.


Or as part of the function creation:

https://www.postgresql.org/docs/current/sql-createfunction.html

SET configuration_parameter { TO value | = value | FROM CURRENT }
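
A sketch of the latter, with the return type and body abbreviated:

create or replace function dvm.blkview_diffs(character varying, character varying)
  returns void  -- whatever the actual return type is
  language plpgsql
  set client_min_messages = 'warning'  -- in effect only while the function runs
as $$
begin
  drop table if exists tmp_diff_blkviews;
  -- ... rest of the function ...
end;
$$;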

--
Adrian Klaver
adrian.kla...@aklaver.com





suppress notices from inside a stored a plpgqsl function

2023-11-27 Thread David Gauthier
Hi:

I have a plpgsql function that has this...

drop table if exists tmp_diff_blkviews;

Even with the "if exists", I still get...

NOTICE:  table "tmp_diff_blkviews" does not exist, skipping
CONTEXT:  SQL statement "drop table if exists tmp_diff_blkviews"
PL/pgSQL function dvm.blkview_diffs(character varying,character varying)
line 6 at SQL statement

I want to suppress that.   Even if the temp table exists, I don't want to
hear about how it had to delete the table.  Just delete it if it exists and
be quiet about it.

This function is being called through perl/dbi. So client side command line
set options, or anything like that, is no good.  Is there  way to control
messaging from inside the function ?


Re: IPV6 issue

2023-11-27 Thread Tom Lane
Adrian Klaver  writes:
> On 11/27/23 12:11, Atul Kumar wrote:
>> I found that localhost was set to .bash_profile and when I removed it 
>> and then re-attempted to connected the database using "psql postgres", I 
>> got this new error:
>> 
>> psql postgres -p 5432
>> psql: error: could not connect to server: No such file or directory
>>         Is the server running locally and accepting
>>         connections on Unix domain socket 
>> "/var/run/postgresql/.s.PGSQL.5432"?

> Do you have more than one version of psql installed?

Yeah, that.  You're apparently using a version of psql/libpq that
thinks the default Unix socket location is /var/run/postgresql;
but the postmaster you are using did not create a socket there.
(Probably it put one in /tmp instead, which is the out-of-the-box
default location.  But some distros consider that insecure so they
override it, typically to /var/run/postgresql/.)

The easiest workaround if you have a mishmash of Postgres libraries
is to tell the postmaster to create sockets in both places.
See "unix_socket_directories" parameter.

regards, tom lane




Re: IPV6 issue

2023-11-27 Thread Adrian Klaver

On 11/27/23 12:11, Atul Kumar wrote:

Hi,

I found that localhost was set to .bash_profile and when I removed it 
and then re-attempted to connected the database using "psql postgres", I 
got this new error:


psql postgres -p 5432
psql: error: could not connect to server: No such file or directory
         Is the server running locally and accepting
         connections on Unix domain socket 
"/var/run/postgresql/.s.PGSQL.5432"?


So DO I need to restart the postgres service or is there any other 
workaround?





In postgresql.conf or any include *.conf it points to does

port = 5432

and what is

listen_addresses

set to?

How did you install Postgres?

Do you have more than one version of psql installed?

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: IPV6 issue

2023-11-27 Thread Atul Kumar
Hi,

I found that localhost was set to .bash_profile and when I removed it and
then re-attempted to connected the database using "psql postgres", I got
this new error:

psql postgres -p 5432
psql: error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket
"/var/run/postgresql/.s.PGSQL.5432"?

So DO I need to restart the postgres service or is there any other
workaround?


Regards.

On Fri, Nov 24, 2023 at 1:08 AM Ron Johnson  wrote:

> On Thu, Nov 23, 2023 at 2:18 PM Atul Kumar  wrote:
>
>> I have postgres 12 running in centos 7 on my two machines, recently I
>> changed the authentication of entries of pg_hba.conf to scram-sha-256 for
>> localhost.
>>
>> Since then in my one machine, I have started getting the below error when
>> I use command "psql postgres"
>>
>> no pg_hba.conf entry for host "::1", user "postgres", database "postgres
>>
>>
>> I found that localhost is resolve to IPV6 by using below command:
>>
>> getent hosts localhost
>>
>> ::1 localhost localhost.localdomain localhost6
>> localhost6.localdomain6
>>
>>
>> Then I tested the same issue in my second machine
>>
>> getent hosts localhost
>>
>> ::1 localhost localhost.localdomain localhost6
>> localhost6.localdomain6
>>
>>
>> but in my second machine I didn't face any such issue while using command
>> "psql postgres", I was able to login into the database without such error.
>>
>>
>> The pg_hba.conf on both machines are identical without having any IPV6
>> entry in it.
>>
>>
>> The entry of pg_hba.conf is like below:
>>
>> # TYPE  DATABASE  USER      ADDRESS        METHOD
>>
>> # "local" is for Unix domain socket connections only
>> local   all       all                      scram-sha-256
>>
>> # IPv4 local connections:
>> host    all       postgres  127.0.0.1/32   scram-sha-256
>>
>>
>> I am not able to understand that my both machines are resolved to IPV6
>> then why is my first machine is throwing this error ?
>>
>
> That *is *curious.
>
> Have you exported PGHOST on either server (in, for example,
> .pgsql_profile, .bash_profile or .bashrc)?  If it is set to localhost on
> the primary server then psql will complain like you noticed.  If it is
> unset on the secondary server, then psql will use the domain socket and not
> complain.
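>
> A quick sanity check on both machines (a sketch, assuming bash):
>
>     echo "PGHOST=$PGHOST"
>     grep -n PGHOST ~/.bash_profile ~/.bashrc ~/.pgsql_profile 2>/dev/null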
>
>


Re: Could not read from file "pg_subtrans/00F5" at offset 122880: Success.

2023-11-27 Thread Laurenz Albe
On Mon, 2023-11-27 at 11:50 +, Sri Mrudula Attili wrote:
> ERROR:  could not access status of transaction 16087052
> DETAIL:  Could not read from file "pg_subtrans/00F5" at offset 122880: 
> Success.
> STATEMENT:  SELECT distinct 

That's data corruption.
Time to restore your backup.

Investigate how you got there.  Did you run "pg_resetwal"?
Did you restore a database from a file system backup?
Did you remove a "backup_label" file?  Is your hardware broken?

Yours,
Laurenz Albe




Re: Ways to aggregate table info from variable number of schemas

2023-11-27 Thread Christophe Pettus



> On Nov 27, 2023, at 10:16, Dominique Devienne  wrote:
> Which means you can't do a declarative SQL query for those
> metadata across projects, since you can't do static / non-dynamic SQL across 
> schemas.

I'm not sure I understand this.  Schemas are just namespaces, and all queries 
have access to them (assuming their roles have the appropriate permissions on 
them).  Or do you mean this is a limitation of your system, because of how the 
permissions are granted?



Ways to aggregate table info from variable number of schemas

2023-11-27 Thread Dominique Devienne
Our PostgreSQL-based system uses a small number of fixed schemas,
and an arbitrary number of per-project schemas (one schema per project).
This is a given, sorry, it cannot be changed. Really.

The fixed schemas contain metadata about the projects (including which
schema a given project uses).

My problem is that some of the important metadata is not in the fixed
(administrative) schemas,
but in the per-project schemas. Which means you can't do a declarative SQL
query for those
metadata across projects, since you can't do static / non-dynamic SQL
across schemas.

I've brainstormed the different ways I could easily query those
"spread-out" metadata,
and I'm trying my chance at community input. Here's what I've considered so
far:

1) Mega union-all view, updated (programmatically) when schemas are
added/removed
2) Concrete tables in one of the fixed schemas, the project-schemas push
data to, via triggers
3) Partitioned table. No experience with it, but from what I've read, a
manually created partitioned table with manually added/removed partition
seems viable. The caveat being that I don't have a (virtual) column right
now in the per-project schema tables that would identify the rows across
schemas/projects. But that could be added, if this is viable.
4) SQL functions taking a project-id, and doing dynamic SQL inside against
a given schema, to fetch the metadata of interest for that SQL function.
More of a PULL approach, which allows a kind of hidden dynamic JOIN when
scanning the projects from the fixed schema table(s). This might be the
simplest? (See the sketch below.)
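
For option 4, a minimal sketch (all names hypothetical):

create or replace function project_meta(p_schema text)
  returns table (proj_name text, created timestamptz)  -- hypothetical columns
  language plpgsql
as $$
begin
  return query execute format(
    'select proj_name, created from %I.metadata', p_schema);
end;
$$;

-- joined against the fixed admin schema (implicit LATERAL):
-- select p.project_id, m.*
-- from admin.projects p, project_meta(p.schema_name) m;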

Maybe there are other options too?

Basically I need to do a little data-warehouse of sorts, no?

Insights, inputs, would be greatly appreciated. Thanks, --DD

PS: A recent comment by Tom seemed to indicate that union-all views are not
well optimized, and don't trim branches of the union-all based on the where
clause. So would not scale well (one internal install has 3000 projects)

PPS: I like the idea of the partitioned table, since could do efficient
partition elimination, but this is completely new territory for me, and I
fear the complexity might be high. Thoughts?


Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Dominique Devienne
On Mon, Nov 27, 2023 at 5:04 PM Adrian Klaver 
wrote:

> On 11/27/23 01:44, Dominique Devienne wrote:
> > On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne wrote:
> > On second thought, I guess that's COPY in its text modes doing the
> escaping?
> > Interesting. The text-based modes of COPY are configurable. There's even
> > a JSON mode.
>
> Where are you seeing the JSON mode for COPY? AFAIK there is only text
> and CSV formats.
>

Indeed. Somehow I thought there was...
I've used the TEXT and BINARY modes, and remembered a wishful thinking JSON
mode!
OK then, if there was, then what I wrote would apply :). --DD


Re: Could not read from file "pg_subtrans/00F5" at offset 122880: Success.

2023-11-27 Thread Adrian Klaver

On 11/27/23 03:50, Sri Mrudula Attili wrote:

Hello,

Post refresh of a Delphix VDB, application users are getting the below 
error while trying to connect.


Could also see these errors in postgresql.log

Error Message:

< 2023-11-24 12:24:10.031 GMT >ERROR:  could not access status of 
transaction 16087052
< 2023-11-24 12:24:10.031 GMT >DETAIL:  Could not read from file 
*"pg_subtrans/00F5" at offset 122880: Success.*

< 2023-11-24 12:24:10.031 GMT >STATEMENT:  SELECT distinct

Postgresql Version:

--
  PostgreSQL 11.16 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-44), 64-bit


From this:

https://docs.delphix.com/docs537/delphix-administration/postgresql-environments-and-data-sources/provisioning-vdbs-from-postgresql-dsources/provisioning-a-postgresql-vdb

there seems to be a lot of moving parts to using Delphix VDB with a 
Postgres database.  You are going to need to provide more information 
about your setup.





Thanks,

Sri Attili


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Adrian Klaver

On 11/27/23 01:44, Dominique Devienne wrote:
On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne wrote:





On second thought, I guess that's COPY in its text modes doing the escaping?
Interesting. The text-based modes of COPY are configurable. There's even 
a JSON mode.


Where are you seeing the JSON mode for COPY? AFAIK there is only text 
and CSV formats.


By miracle, would the JSON output mode recognize JSON[B] values, and 
avoid the escaping?





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Parallel Index Scan Implementation

2023-11-27 Thread Ron Johnson
On Mon, Nov 27, 2023 at 4:11 AM Brajendra Pratap Singh <
singh.bpratap...@gmail.com> wrote:

> Hi Postgresql Experts,
>
> We have a basic requirement where we need to implement the parallel index
> scan instead of parallel seq scan at table level. Please suggest the best
> way to do this.
>
> Postgresql DB Version : 12.6
>

That's far behind the latest patch level.


>
> Table size : 2-5 GB
>
> OS : Centos-7
>
> RAM : 96 GB
> CPU : 32
>
> shared_buffer=20GB
>

Plural: shared_buffers


> Effective_cache_size=60GB
>
> Max_worker_process = 32
> Max_parallel_worker=32
> Max_parallel_worker_per_gather=8
>
> Please let me know if any further details required .
>

Is there proper index support on the relevant columns?  Have you analyzed
the tables?  Are the values you're filtering on  uncommon enough for PG to
think that indexes are relevant?
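
A first check could be something like this (table and column names
hypothetical):

explain (analyze, buffers)
select * from big_table where some_col = 42;

-- and, for this session only, to see whether an index scan is possible at all:
set enable_seqscan = off;
explain (analyze, buffers)
select * from big_table where some_col = 42;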


Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Dominique Devienne
On Mon, Nov 27, 2023 at 3:56 PM Tom Lane  wrote:

> "David G. Johnston"  writes:
> > I agree there should be a copy option for “not formatted” so if you dump
> a
> > single column result in that format you get the raw unescaped contents of
> > the column.
>
> I'm not sure I even buy that.  JSON data in particular is typically
> multi-line, so how will you know where the row boundaries are?
> That is, is a newline a row separator or part of the data?
>
> You can debate the intelligence of any particular quoting/escaping
> scheme, but imagining that you can get away without having one at
> all will just create its own problems.
>

What I was suggesting is not about a "not formatted" option.
But rather that, for JSON values (i.e. typed `json` or `jsonb`) in a
JSON-formatted COPY operation, the JSON values should not be
serialized to text that is simply output as a JSON-text-value by COPY,
but "inlined" as a "real" JSON value within the JSON document output by
COPY.

This is a special case, where the inner and outer "values" (for lack of a
better terminology) are *both* JSON documents, and given that JSON is
hierarchical, the inner JSON value can either be 1) serialized to text
first, which must thus be escaped using the JSON escaping rules, or
2) NOT serialized, but "inlined" or "spliced" into the outer COPY JSON
document.

I guess COPY in JSON mode supports only #1 now? While #2 makes more sense
to me.
But both options are valid. Is that clearer?
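
To illustrate with a made-up json column j holding {"a": 1}:

#1 (serialize, then escape as a string value):  {"j": "{\"a\": 1}"}
#2 (splice in as a real JSON value):            {"j": {"a": 1}}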

BTW, JSON is not multi-line, except for insignificant whitespace.
So even COPY in JSON mode is not supposed to be line based I guess?
Unless COPY in JSON mode is more like NDJSON (https://ndjson.org/)? --DD


Re: PostgreSql: Canceled on conflict out to old pivot

2023-11-27 Thread Ron Johnson
On Mon, Nov 27, 2023 at 2:17 AM Wirch, Eduard 
wrote:

> Hi
>
> We have a PostgreSql 15 server serving around 30 databases, one schema
> each with the same layout. Each database is used by one application
> instance. The application consistently uses transactions with isolation
> level serializable to access the database, optimizing by using explicit
> read only transactions, where applicable. Once the server reaches 100% we
> get an increased amount of serialize conflict errors.
>

Maybe I haven't had my coffee yet, but 100% of what?


Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Tom Lane
"David G. Johnston"  writes:
> I agree there should be a copy option for “not formatted” so if you dump a
> single column result in that format you get the raw unescaped contents of
> the column.

I'm not sure I even buy that.  JSON data in particular is typically
multi-line, so how will you know where the row boundaries are?
That is, is a newline a row separator or part of the data?

You can debate the intelligence of any particular quoting/escaping
scheme, but imagining that you can get away without having one at
all will just create its own problems.

regards, tom lane




Re: Removing oids with pg_repack

2023-11-27 Thread CG
 

On Wednesday, November 22, 2023 at 12:38:54 PM EST, Achilleas Mantzios 
 wrote:

> On 22/11/23 15:14, CG wrote:
>
>> On Wednesday, November 22, 2023 at 01:20:18 AM EST, Achilleas Mantzios 
>>  wrote:
>>
>>> On 21/11/23 20:41, CG wrote:
>>>
>>>> I have a very large PostgreSQL 9.5 database that still has very large 
>>>> tables with oids. I'm trying to get rid of the oids with as little 
>>>> downtime as possible so I can prep the database for an upgrade past 
>>>> PostgreSQL 11. I had a wild idea to mod pg_repack to write a new table 
>>>> without oids. I think it almost works.
>>>>
>>>> To test out my idea I made a new table wipe_oid_test with oids. I 
>>>> filled it with a few rows of data.
>>>>
>>>> But PostgreSQL still thinks that the table has oids:
>>>>
>>>> mydata=# \d+ wipe_oid_test
>>>>                 Table "public.wipe_oid_test"
>>>>  Column | Type | Modifiers | Storage  | Stats target | Description
>>>> --------+------+-----------+----------+--------------+-------------
>>>>  k      | text | not null  | extended |              |
>>>>  v      | text |           | extended |              |
>>>> Indexes:
>>>>     "wipe_oid_test_pkey" PRIMARY KEY, btree (k)
>>>> Has OIDs: yes
>>>
>>> Except where does it mention in the pg_repack docs (or source) that it 
>>> is meant to be used for NO OIDS conversion?
>>
>> It does not -- I was trying to leverage and tweak the base functionality 
>> of pg_repack, which sets up triggers and migrates data. I figured that 
>> if the target table was created without OIDs, then when pg_repack did 
>> the "swap" operation the new table would take over, with the added bonus 
>> of not having oids.
>>
>>>> I can modify pg_class and set relhasoids = false, but it isn't 
>>>> actually eliminating the oid column. `\d+` will not report that it has 
>>>> oids, but the oid column is still present and returns the same result 
>>>> as before updating pg_class.
>>>
>>> Just Dont!
>>
>> Noted. ;)
>>
>>>> So I'm definitely missing something. I really need a point in the 
>>>> right direction. Please help! ;)
>>>
>>> There are a few methods to get rid of OIDs:
>>>
>>> - ALTER TABLE .. SET WITHOUT OIDS (just mentioning, you already checked 
>>>   that)
>>
>> This makes the database unusable for hours and hours because it locks 
>> the table entirely while it performs the operation. That's just 
>> something that we can't afford.
>>
>>> - Use table copy + use of a trigger to log changes:
>>>   https://dba.stackexchange.com/questions/259359/eliminating-oids-while-upgrading-postgresql-from-9-4-to-12
>>
>> That SO post is not quite the effect I'm going for. Its poster was using 
>> OIDs in their application and needed a solution to maintain those values 
>> after conversion. I simply want to eliminate them without the 
>> extraordinary downtime the database would experience during ALTER 
>> operations.
>
> Sorry, I meant this one: "Stripping OIDs from tables in preparation for 
> pg_upgrade" ("I have a postgres database in RDS, file size approaching 
> 1TB. We started in 2005, using ruby/activerecord/rails...")

This is the same idea as the Percona ETL strategy, and essentially 90% of 
what pg_repack already does (creates new tables, sets up triggers, locks 
the tables, and swaps new for old at the end of the process).

>>> - Use of Inheritance (the neatest solution I have seen; this is what I 
>>>   used for a 2TB table conversion):
>>>   https://www.percona.com/blog/performing-etl-using-inheritance-in-postgresql/
>>
>> This is closest to the effect I was going for. pg_repack essentially 
>> creates a second table and fills it with the data from the first table, 
>> while ensuring standard db operations against that table continue to 
>> function while the data is being moved from the old table to the new 
>> table. The process outlined in the Percona ETL strategy has to be 
>> repeated per-table, which is work I was hoping to avoid by leveraging 
>> 95% of the functionality of pg_repack while supplying my own 5%, so that 
>> the resulting table would not have oids regardless of the source table's 
>> configuration.
>>
>> For my experiment, Table A did have oids. Table B (created by pg_repack) 
>> did not (at least at creation). When the "swap" operation happened in 
>> pg_repack, the metadata for Table A was assigned to Table B. I'm just 
>> trying to figure out what metadata I need to change in the system tables 
>> to reflect the actual table structure.
>>
>> I have the fallback position of the Percona ETL strategy. But I feel 
>> like I'm REALLY close with pg_repack, and I just don't understand enough 
>> about the system internals to nudge it to correctness; I need some 
>> expert assistance to tap it in the hole.
>
> Why don't you just inspect the pg_repack code?

I have, and I have modified pg_repack (the modification was shown in my 
first post) to create and write to a new table without oids. The problem is 
that when the "swap" operation happens, the old tabledefs with all the old 
oid baggage get mapped on top of the new table that doesn't have

Re: Emitting JSON to file using COPY TO

2023-11-27 Thread David G. Johnston
On Monday, November 27, 2023, Pavel Stehule  wrote:

> Hi
>
> po 27. 11. 2023 v 14:27 odesílatel David G. Johnston <
> david.g.johns...@gmail.com> napsal:
>
>> On Monday, November 27, 2023, Dominique Devienne 
>> wrote:
>>
>>> There's even a JSON mode.
>>> By miracle, would the JSON output mode recognize JSON[B] values, and
>>> avoid the escaping?
>>>
>>
>> I agree there should be a copy option for “not formatted” so if you dump
>> a single column result in that format you get the raw unescaped contents of
>> the column. As soon as you ask for a format your json is now embedded so it
>> is a value within another format and any structural aspects of the wrapper
>> present in the json text representation need to be escaped.
>>
>
> Is it better to use the LO API for this purpose?  It is native for not
> formatted data.
>

Using LO is, IMO, never the answer.  But if you are using a driver API
anyway just handle the normal select query result.

David J.


Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Pavel Stehule
Hi

po 27. 11. 2023 v 14:27 odesílatel David G. Johnston <
david.g.johns...@gmail.com> napsal:

> On Monday, November 27, 2023, Dominique Devienne 
> wrote:
>
>> There's even a JSON mode.
>> By miracle, would the JSON output mode recognize JSON[B] values, and
>> avoid the escaping?
>>
>
> I agree there should be a copy option for “not formatted” so if you dump a
> single column result in that format you get the raw unescaped contents of
> the column. As soon as you ask for a format your json is now embedded so it
> is a value within another format and any structural aspects of the wrapper
> present in the json text representation need to be escaped.
>

Is it better to use the LO API for this purpose?  It is native for not
formatted data.

Regards

Pavel


> David J.
>


Re: Emitting JSON to file using COPY TO

2023-11-27 Thread David G. Johnston
On Monday, November 27, 2023, Dominique Devienne 
wrote:

> There's even a JSON mode.
> By miracle, would the JSON output mode recognize JSON[B] values, and avoid
> the escaping?
>

I agree there should be a copy option for “not formatted” so if you dump a
single column result in that format you get the raw unescaped contents of
the column. As soon as you ask for a format your json is now embedded so it
is a value within another format and any structural aspects of the wrapper
present in the json text representation need to be escaped.

David J.


Re: Could not read from file "pg_subtrans/00F5" at offset 122880: Success.

2023-11-27 Thread Sri Mrudula Attili

Hello,

Post refresh of a Delphix VDB, application users are getting the below 
error while trying to connect.


Could also see these errors in postgresql.log

Error Message:

< 2023-11-24 12:24:10.031 GMT >ERROR:  could not access status of 
transaction 16087052
< 2023-11-24 12:24:10.031 GMT >DETAIL:  Could not read from file 
*"pg_subtrans/00F5" at offset 122880: Success.*

< 2023-11-24 12:24:10.031 GMT >STATEMENT:  SELECT distinct

Postgresql Version:

--
 PostgreSQL 11.16 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-44), 64-bit



Thanks,

Sri Attili

Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Dominique Devienne
On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne 
wrote:

> On Sat, Nov 25, 2023 at 10:00 PM Adrian Klaver 
> wrote:
>
>> On 11/25/23 11:21, Davin Shearer wrote:
>> > Hello!
>> >
>> > I'm trying to emit a JSON aggregation of JSON rows to a file using COPY
>> > TO, but I'm running into problems with COPY TO double quoting the
>> > output.   Here is a minimal example that demonstrates the problem I'm
>> > having:
>> >
>>
>> > I have tried to get COPY TO to copy the results to file "as-is" by
>> > setting the escape and the quote characters to the empty string (''),
>> > but they only apply to the CSV format.
>> >
>> > Is there a way to emit JSON results to file from within postgres?
>> > Effectively, nn "as-is" option to COPY TO would work well for this JSON
>> > use case.
>> >
>>
>> Not using COPY.
>>
>> See David Johnson's post for one way using the client psql.
>>
>> Otherwise you will need to use any of the many ETL programs out there
>> that are designed for this sort of thing.
>>
>
> Guys, I don't get answers like that. The JSON spec is clear:
>

Oops, sorry, user error. --DD

PS: The JSON spec is a bit ambiguous. First it says

> Any codepoint except " or \ or control characters

And then it clearly shows \" as a valid sequence...
Sounds like JQ is too restrictive?

Or that's the double-escape that's the culprit?
i.e. \\ is in the final text, so that's just a backslash,
and then the double-quote is no longer escaped.

I've recently noticed json_agg(row_to_json(t))
is equivalent to json_agg(t)

Maybe use that instead? Does that make a difference?

I haven't noticed wrong escaping of double-quotes yet,
but then I'm using the binary mode of queries. Perhaps that matters.

On second thought, I guess that's COPY in its text modes doing the escaping?
Interesting. The text-based modes of COPY are configurable. There's even a
JSON mode.
By miracle, would the JSON output mode recognize JSON[B] values, and avoid
the escaping?


Re: Emitting JSON to file using COPY TO

2023-11-27 Thread Dominique Devienne
On Sat, Nov 25, 2023 at 10:00 PM Adrian Klaver 
wrote:

> On 11/25/23 11:21, Davin Shearer wrote:
> > Hello!
> >
> > I'm trying to emit a JSON aggregation of JSON rows to a file using COPY
> > TO, but I'm running into problems with COPY TO double quoting the
> > output.   Here is a minimal example that demonstrates the problem I'm
> > having:
> >
>
> > I have tried to get COPY TO to copy the results to file "as-is" by
> > setting the escape and the quote characters to the empty string (''),
> > but they only apply to the CSV format.
> >
> > Is there a way to emit JSON results to file from within postgres?
> > Effectively, nn "as-is" option to COPY TO would work well for this JSON
> > use case.
> >
>
> Not using COPY.
>
> See David Johnson's post for one way using the client psql.
>
> Otherwise you will need to use any of the many ETL programs out there
> that are designed for this sort of thing.
>

Guys, I don't get answers like that. The JSON spec is clear:



Re: How to eliminate extra "NOT EXISTS"-query here?

2023-11-27 Thread Dominique Devienne
On Sat, Nov 25, 2023 at 5:53 PM hector vass  wrote:

> Not sure you need to use array why not simple table joins, so a table with
> your criteria x y z t joined to stuff to give you candidates that do match,
> then left join with coalesce to add the 'd'
>
> select
>   --a.id, b.test_id,
>   coalesce(a.id, b.test_id) as finalresult
> from test a
> left join (
>   select test_id
>   from stuff a
>   inner join (values ('x'),('y'),('z'),('t')) b (v) using (v)
>   group by 1
> ) b on (a.id = b.test_id);
>

Hi Hector. Hopefully this is not a stupid question...

How is that equivalent to the `NOT ARRAY ... <@ ...`, though?
The inner-join-distinct above will return test_id's on any match, but you
can't know if all array values are matches. Which is different from

> Is the first array contained by the second

from the <@ operator, no?
I'm unfamiliar with these operators, so am I missing something?
Just trying to understand the logic here. Thanks, --DD
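
PS: For reference, a quick psql check of the containment semantics:

select array['x','y'] <@ array['x','y','z','t'];  -- true: every left element appears on the right
select array['x','q'] <@ array['x','y','z','t'];  -- false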


Parallel Index Scan Implementation

2023-11-27 Thread Brajendra Pratap Singh
Hi Postgresql Experts,

We have a basic requirement where we need to implement the parallel index
scan instead of parallel seq scan at table level. Please suggest the best
way to do this.

Postgresql DB Version : 12.6

Table size : 2-5 GB

OS : Centos-7

RAM : 96 GB
CPU : 32

shared_buffer=20GB
Effective_cache_size=60GB

Max_worker_process = 32
Max_parallel_worker=32
Max_parallel_worker_per_gather=8

Please let me know if any further details required .

Thanks & Regards,

Singh