Re: Moving to Postgresql database

2024-01-16 Thread Dominique Devienne
On Mon, Jan 15, 2024 at 5:17 AM veem v  wrote:

> Is any key design/architectural changes should the app development team
> [...], should really aware about
>

Hi. One of the biggest pitfalls of PostgreSQL, from the app-dev perspective,
is the fact that any failed statement aborts the whole transaction, with
ROLLBACK as the only recourse.

So if you have any code that does
try-something-and-if-it-fails-do-something-else,
which works in most RDBMSs AFAIK, then that's NOT going to work with
PostgreSQL.
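A minimal sketch of the pitfall (hypothetical table t with a primary key on id):

```sql
BEGIN;
INSERT INTO t (id) VALUES (1);   -- ok
INSERT INTO t (id) VALUES (1);   -- fails: duplicate key
-- The transaction is now aborted; every further statement returns:
-- "current transaction is aborted, commands ignored until end of
--  transaction block"
INSERT INTO t (id) VALUES (2);   -- also rejected, even though valid
ROLLBACK;                        -- the only way out
```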

I think there's an extension to add it (I don't recall its name), but I'm
always surprised it's not built-in, even just as an opt-in choice. But maybe
AWS Aurora is different in that regard? I'm talking about OSS PostgreSQL.

OTOH, one of the biggest benefits of PostgreSQL vs Oracle is transactional
DDL. But for many/most, DDLs are mostly fixed, so it doesn't matter as much
to them as it does to us.
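For instance, a whole batch of schema changes can be wrapped in a single transaction and undone atomically (hypothetical table name):

```sql
BEGIN;
CREATE TABLE audit_log (id bigint PRIMARY KEY, msg text);
ALTER TABLE audit_log ADD COLUMN at timestamptz NOT NULL DEFAULT now();
ROLLBACK;  -- both DDL statements are undone; audit_log never existed
-- In Oracle, by contrast, each DDL statement implicitly commits.
```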

libpq is much better than OCI, although nowadays there's a better official C
API on top of OCI. And since the protocol is open and OSS, unlike Oracle
SQL*Net, there are alternate pure-JS, pure-Rust, pure-Go, etc.
implementations besides libpq, to better suit the client-side dev stack.

Of course, Oracle is batteries-included, while PostgreSQL relies on its
vast extension ecosystem instead.
Except that you're limited to the (small) subset that intersects with what
the cloud vendors' managed PostgreSQL offerings support, if you must also
support those...

Another major difference is that the catalogs (dictionaries) in PostgreSQL
are fully open (modulo pg_authid and a few others).
So there are no USER_/ALL_/DBA_ catalog variants that hide which objects
exist in the cluster depending on privileges, like there are in Oracle.
Knowing an object exists doesn't mean you can access it, but even that is a
no-no for some, security-wise.
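For example, any connected role can walk the catalogs, and access must be checked separately (the table name below is hypothetical):

```sql
-- Any role can enumerate ordinary tables across the cluster's catalogs:
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r';

-- ...but visibility is not access; privileges are a separate check:
SELECT has_table_privilege(current_user, 'some_schema.some_table', 'SELECT');
```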

If you care about LOBs, Oracle SecureFiles are (way?) faster, as of our last
test a long time ago. OTOH, PostgreSQL's bytea is much larger and more
convenient than Oracle's RAW (but that's also very dated info).

These are the main ones that come to mind. I'm sure there are many others.
FWIW. --DD


Re: Software Bill of Materials (SBOM)

2024-01-16 Thread Кристина Валентей
Thank you, Julian, for your answer.

SCANOSS is very good, but it is probably not suitable for searching the
external libraries that PostgreSQL uses.

Therefore, I ask again: is there a ready-made SBOM file for the PostgreSQL
project, or a tool that can create it based on the C source code?

On Sat, 13 Jan 2024 at 14:10, Julian Coccia wrote:

> Hi Cristina,
>
> Have you tried SCANOSS?
>
> To install:
>
> pip3 install scanoss
>
> To generate your SBOM (SPDX lite):
>
> scanoss-py scan --format spdxlite DIRECTORY/
>
> Alternatively, in CycloneDX format instead:
>
> scanoss-py scan --format cyclonedx DIRECTORY/
>
> Hope this helps.
>
> Regards,
> Julian
>
> *From: *Кристина Валентей
> *Date: *Saturday, 13 January 2024 at 12:03
> *To: *pgsql-general@lists.postgresql.org
> *Subject: *Software Bill of Materials (SBOM)
>
> Good afternoon.
> I'm looking for a way to build SBOM files for the PostgreSQL build, to
> perform software composition analysis (SCA).
>
> Please tell me, how can I do this?
>
> Thank you.


RE: data migration using EXTENSION tds_fdw

2024-01-16 Thread Eyüp Liste
Hello,

1) TURKISH_CI_AS at the SQL Server database

2) Encoding: UTF8, Collate: en_US.UTF-8 at PostgreSQL

Thank you for the fast response.
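Given that mismatch (Turkish-collated data arriving at a UTF-8 client), one avenue worth checking, sketched from the tds_fdw README (the foreign server name below is a placeholder; verify the option against your tds_fdw version), is to set the connection character set on the foreign server so FreeTDS converts the server-side data to UTF-8:

```sql
-- "mssql_server" is a placeholder for the actual foreign server name.
ALTER SERVER mssql_server OPTIONS (ADD character_set 'UTF-8');

-- The equivalent FreeTDS-level setting lives in freetds.conf:
--   [mssql]
--   client charset = UTF-8
```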

Best Regards

Eyup Liste
Senior Database Administrator
___

TurkNet Iletisim
M: +905452120762
E: eyup.li...@turk.net

-Original Message-
From: Adrian Klaver  
Sent: Monday, January 15, 2024 7:10 PM
To: Eyüp Liste ; pgsql-general@lists.postgresql.org
Subject: Re: data migration using EXTENSION tds_fdw




On 1/14/24 10:55, Eyüp Liste wrote:
> Hello,
>
> I have been working on migrating a table from SQL Server 2017 to
> PostgreSQL 15. I have completed all the steps. But when I try to migrate
> some columns, I get this error:
>
> NOTICE: tds_fdw: Query executed correctly NOTICE: tds_fdw: Getting
> results ERROR: DB-Library error: DB #: 2403, DB Msg: Some character(s)
> could not be converted into client's character set. Unconverted bytes
> were changed to question marks ('?'), OS #: 0, OS Msg: Success, Level:
> 4 SQL state: HV00L

This "Some character(s) could not be converted into client's character set."
would seem to be the issue.

Therefore, what is the character set used in:

1) The SQL Server database?

2) The Postgres database?

>
> Please help me about this issue.
>
> Best regards
>
>
>
>
> *Eyüp Liste*
> Technology
> System And Infrastructure
>

--
Adrian Klaver
adrian.kla...@aklaver.com



/usr/local/sisis-pap/pgsql-15.1/bin/postmaster SIGSEGV in podman container on MacOS

2024-01-16 Thread Matthias Apitz


Hello,

I've built a podman container on MacOS which includes

- a SuSE amd64 OS from registry.suse.com/bci/bci-base:15.4

- enough installed commands:
  vim openssh-server tar gzip which awk libgnutls30 glibc-locale
  strace telnet openssh-clients lsof gdb

- installed our application packages, one of them brings our own compiled
  PostgreSQL 15 below /usr/local/sisis-pap/pgsql-15.1

- our own applications

After starting the container on my MacBook Pro with

$ podman run -p 2022:22 -t suse

I can SSH into the container with:

$ ssh -p 2022 root@localhost

and I see the following problem:

Our C-written applications are starting fine, as this log file shows:

sisis@:/home/sisis> tail /var/spool/sisis/catserver/log/catserver.log
16.01.2024 10:57:07.990  
===
16.01.2024 10:57:07.990
16.01.2024 10:57:07.990   SLNP Server
16.01.2024 10:57:07.990
16.01.2024 10:57:07.990  
---
16.01.2024 10:57:07.990
16.01.2024 10:57:07.990  CATServer <392> : started at : 16.01.2024  10:57:07

16.01.2024 10:57:08.063   SLNP-Demon CATServer <392> : Error in Open Database 
: rc1:999 rc2:0 rc3:-402

The server can't open the database per ESQL/C because the PostgreSQL
server is not running:

:/root # ps ax
PID TTY  STAT   TIME COMMAND
  1 pts/0Ssl+   0:00 /usr/bin/qemu-x86_64-static /bin/sh -c /usr/local/b
 43 pts/0Sl+0:00 /usr/bin/qemu-x86_64-static /usr/sbin/sshd -D
 67 ?Ssl0:00 /usr/bin/qemu-x86_64-static /usr/sbin/sshd -D -R
 72 pts/1Ssl0:00 /usr/bin/qemu-x86_64-static /bin/bash
416 ?Rl+0:00 /usr/bin/ps ax

The reason for this is:

gdb /usr/local/sisis-pap/pgsql-15.1/bin/postmaster 
/data/postgresql151/data/qemu_postgres_20240116-094342_12.core
GNU gdb (GDB; SUSE Linux Enterprise 15) 12.1
...
Core was generated by `/usr/local/sisis-pap/pgsql-15.1/bin/postmaster -D 
/data/postgresql151/data'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x2b8f9b4d in rand_pool_add_additional_data ()
   from /usr/local/sisis-pap/lib/libcrypto.so.1.1
Missing separate debuginfos, use: zypper install 
glibc-debuginfo-2.31-150300.63.1.x86_64 
libaudit1-debuginfo-3.0.6-150400.4.13.1.x86_64 
libffi7-debuginfo-3.2.1.git259-10.8.x86_64 
libgmp10-debuginfo-6.1.2-4.9.1.x86_64 
libgnutls30-debuginfo-3.7.3-150400.4.35.1.x86_64 
libhogweed6-debuginfo-3.7.3-150400.2.21.x86_64 
libidn2-0-debuginfo-2.2.0-3.6.1.x86_64 
libjitterentropy3-debuginfo-3.4.0-15.1.9.1.x86_64 
libnettle8-debuginfo-3.7.3-150400.2.21.x86_64 
libp11-kit0-debuginfo-0.23.22-150400.1.10.x86_64 
libtasn1-6-debuginfo-4.13-15.4.8.1.x86_64 
libunistring2-debuginfo-0.9.10-1.1.x86_64 
pam-debuginfo-1.3.0-15.6.61.1.x86_64
(gdb) bt
#0  0x2b8f9b4d in rand_pool_add_additional_data ()
   from /usr/local/sisis-pap/lib/libcrypto.so.1.1
#1  0x2b8f8371 in rand_drbg_get_additional_data ()
   from /usr/local/sisis-pap/lib/libcrypto.so.1.1
#2  0x2b8f7291 in RAND_DRBG_bytes ()
   from /usr/local/sisis-pap/lib/libcrypto.so.1.1
#3  0x00b1a11a in pg_strong_random ()
#4  0x0087a460 in RandomCancelKey ()
#5  0x0087af71 in assign_backendlist_entry ()
#6  0x0087acdc in do_start_bgworker ()
#7  0x0087b20f in maybe_start_bgworkers ()
#8  0x008780c3 in reaper ()
#9  
#10 0x2c50e076 in select () from /lib64/libc.so.6
#11 0x0087625b in ServerLoop ()
#12 0x00875c4e in PostmasterMain ()
#13 0x007894a1 in main ()
(gdb) quit
:/root # date
Di 16. Jan 10:11:33 UTC 2024

:/root # uname -a
Linux 0a6d5e158921 6.6.8-200.fc39.aarch64 #1 SMP PREEMPT_DYNAMIC Thu Dec 21 
04:21:07 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux


Any idea what could be missing here? 

Only btw: The container built on RedHat runs fine on RedHat and also on
SuSE Linux when the container gets pushed from RedHat to SuSE, i.e.
without rebuilding it.

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub

I am not at war with Russia.  Я не воюю с Россией.
Ich bin nicht im Krieg mit Russland.




postgres sql assistance

2024-01-16 Thread arun chirappurath
Dear all,

I am an accidental postgres DBA and learning things every day. Apologies
for my questions if not properly drafted.

I am trying to load data from the temp table to the main table and catch
the exceptions inside another table.

temp table is cast with the main table data type and trying to load the
data.

temp table is below.

 category_name                         | description                        | is_active
---------------------------------------+------------------------------------+-----------
 Tech123212312312323233213123123123123 | Furniture and home decor           | true
 Tech123212312312323233213123123123123 | Electronic devices and accessories | true
 Elec                                  | Books of various genres            | 15
 TV                                    | Books                              | 12
 cla                                   | Apparel and fashion accessories    | true

category_name is varchar(25) and is_active is boolean in the main table. So I
should get exceptions for category_name on rows 1 and 2, and for the boolean
on rows 4 and 5.

The exception table is below. Instead of showing an exception for the value
12 in is_active, it shows the old exception for 15 again... The script is
attached... The SQLERRM value is not getting updated for the row with 12.
What could be the reason for this?

 value too long for type character varying(25) | category_name | 1 | 2024-01-16 16:17:01.279 +0530
 value too long for type character varying(25) | description   | 2 | 2024-01-16 16:17:01.279 +0530
 invalid input syntax for type boolean: "15"   | is_active     | 3 | 2024-01-16 16:17:01.279 +0530
 invalid input syntax for type boolean: "15"   |               | 4 | 2024-01-16 16:17:01.279 +0530
 invalid input syntax for type boolean: "15"   |               | 5 | 2024-01-16 16:17:01.279 +0530
CREATE OR REPLACE FUNCTION insert_temp_data_to_main_table()
RETURNS VOID AS $$
DECLARE
v_main_table_name TEXT := 'main_categories';
v_temp_table_name TEXT := 'tmp_categories';
v_error_table_name TEXT := 'error_log_table';
v_sql_statement TEXT;
BEGIN
-- Clear the error log table
EXECUTE 'TRUNCATE TABLE ' || v_error_table_name;

    -- Build the complete SQL statement with aggregated columns and
    -- select clauses
    v_sql_statement := format('
        INSERT INTO %I (%s)
        SELECT %s
        FROM %I',
        v_main_table_name,
        (SELECT string_agg(column_name, ', ')
         FROM information_schema.columns
         WHERE table_name = v_main_table_name),
        (SELECT string_agg('CAST(' || v_temp_table_name || '.' || column_name
                           || ' AS ' || data_type || ')', ', ')
         FROM information_schema.columns
         WHERE table_name = v_temp_table_name),
        v_temp_table_name);

-- Print the SQL statement
RAISE NOTICE 'Generated SQL statement: %', v_sql_statement;

-- Insert data into the main table from the temp table
EXECUTE v_sql_statement;

EXCEPTION
WHEN others THEN
DECLARE
v_error_msg TEXT;
v_failed_column_name TEXT;
v_row_counter INT := 1;
BEGIN
-- Get the specific error message
v_error_msg := SQLERRM;

-- Get the failed column name
SELECT column_name INTO v_failed_column_name
FROM information_schema.columns
WHERE table_name = v_temp_table_name
ORDER BY ordinal_position
LIMIT 1 OFFSET v_row_counter - 1;

-- Log the error into the error log table
        EXECUTE format('
            INSERT INTO %I (error_message, failed_column_name,
                            failed_row_number)
            VALUES ($1, $2, $3)', v_error_table_name)
        USING v_error_msg, v_failed_column_name, v_row_counter;
END;
END;
$$ LANGUAGE plpgsql;


Nested-Internal Functions

2024-01-16 Thread Rossana Ocampos
Hello ,

I have a query about creating nested functions in PostgreSQL.

I am currently using PostgreSQL 15 and I am trying to create a nested
function with the following structure:

CREATE OR REPLACE FUNCTION external_function ()

RETURNS void AS $$

DECLARE

external_variable;



-- Define the internal function

FUNCTION internal_function ()

RETURNS void AS $$

DECLARE

internal_variable INT;

BEGIN

-- Internal function code

internal_variable:= 10;

RAISE NOTICE 'Internal Variable: %', internal_variable;

END;

$$ LANGUAGE plpgsql;

BEGIN

-- External function code

external_variable:= 5;

RAISE NOTICE 'External variable: %', external_variable;

 

-- Call internal function

PERFORM internal_function ();

END;

$$ LANGUAGE plpgsql;

 

However, I get an error, and I can't compile the function.

Thank you very much for your help

Rossana Ocampos 



Aw: Nested-Internal Functions

2024-01-16 Thread Karsten Hilbert
> I am currently using PostgreSQL 15 and I am trying to create a nested 
> function with the following structure:
...
 
> However, I get an error

What *is* the error ?

Karsten




Re: postgres sql assistance

2024-01-16 Thread Raul Giucich
Hi Arun, can you share the SQL used for this insert? Visually it seems some
characters are affecting the data.
Best regards,
Raul

On Tue, 16 Jan 2024 at 9:35 a.m., arun chirappurath (arunsnm...@gmail.com)
wrote:

> Dear all,
>
> I am an accidental postgres DBA and learning things every day. Apologies
> for my questions if not properly drafted.
>
> I am trying to load data from the temp table to the main table and catch
> the exceptions inside another table.
>
> temp table is cast with the main table data type and trying to load the
> data.
>
> temp table is below.
>
> category_name  |description
>| is_active
>
> ---+-+---
>  *Tech123212312312323233213123123123123*| Furniture and home decor
>  | true
>  *Tech123212312312323233213123123123123*| Electronic devices and
> accessories  | true
>  Elec| Books of various genres
> | *15*
>  TV  | Books
> | *12*
>  cla | Apparel and fashion accessories
> | true
>
> category name is varchar(25) and is_active is boolean in main table. So i
> should get exceptions for 1st,2nd for category_name rows and 4 and 5th rows
> for boolean. In exception table results,its only showing
>
> Exception table is below. Here instead of showing exception for value 12
> in the is_active table its showing old exception for 15 itself.. Script is
> attached,,...SQLERRM value is not getting updated for row 12..WHat could be
> the reason for this?
>
> value too long for type character varying(25) category_name 1 2024-01-16
> 16:17:01.279 +0530 value too long for type character varying(25)
> description 2 2024-01-16 16:17:01.279 +0530 invalid input syntax for type
> boolean: "15" is_active 3 2024-01-16 16:17:01.279 +0530 *invalid input
> syntax for type boolean: "15" * 4 2024-01-16 16:17:01.279 +0530 *invalid
> input syntax for type boolean: "15"* 5 2024-01-16 16:17:01.279 +0530
>
>


Re: Nested-Internal Functions

2024-01-16 Thread David G. Johnston
On Tuesday, January 16, 2024, Rossana Ocampos  wrote:

> Hello,
>
> I have a query about creating nested functions in PostgreSQL.
>
> I am currently using PostgreSQL 15 and I am trying to create a nested
> function with the following structure:
>
> CREATE OR REPLACE FUNCTION external_function ()
> RETURNS void AS $$
> DECLARE
> external_variable;
>
> -- Define the internal function
> FUNCTION internal_function ()
> RETURNS void AS $$
> DECLARE
> internal_variable INT;
> BEGIN
> -- Internal function code
> internal_variable := 10;
> RAISE NOTICE 'Internal Variable: %', internal_variable;
> END;
> $$ LANGUAGE plpgsql;
>
> BEGIN
> -- External function code
> external_variable := 5;
> RAISE NOTICE 'External variable: %', external_variable;
>
> -- Call internal function
> PERFORM internal_function ();
> END;
> $$ LANGUAGE plpgsql;
>
> However, I get an error, and I can't compile the function.
>
> Thank you very much for your help
>
You did the nested dollar quoting wrong; and I don't see the word CREATE
where you try to define the function inside the outer function.

You are probably better off just defining two functions independently
anyway; there is minimal benefit to having one function define another in
PostgreSQL, as there are no closures.
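To make both points concrete, here is a sketch (not the OP's intended design, just the idiomatic PostgreSQL shape): distinct dollar-quote tags avoid the nesting problem, and the "inner" function simply becomes a second top-level function:

```sql
CREATE OR REPLACE FUNCTION internal_function() RETURNS void AS $inner$
BEGIN
    -- What was the "internal function code"
    RAISE NOTICE 'Internal Variable: %', 10;
END;
$inner$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION external_function() RETURNS void AS $outer$
BEGIN
    -- What was the "external function code"
    RAISE NOTICE 'External variable: %', 5;
    PERFORM internal_function();
END;
$outer$ LANGUAGE plpgsql;
```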

David J.


Re: postgres sql assistance

2024-01-16 Thread Rob Sargent

On 1/16/24 06:00, Raul Giucich wrote:
Hi Arun, can you share the sql used for this insert. Visually it seems 
some character are affecting the data.

Best regards,
Raul


Raul, the OP attached the SQL.


Re: Nested-Internal Functions

2024-01-16 Thread Laurenz Albe
On Tue, 2024-01-16 at 13:15 +0100, Rossana Ocampos wrote:
> I have a query about creating nested functions in PostgreSQL.

There are no "nested functions" in PostgreSQL.
You'd need to rewrite that to use a proper stand-alone function.

Yours,
Laurenz Albe




Re: Moving to Postgresql database

2024-01-16 Thread Adrian Klaver

On 1/16/24 00:06, Dominique Devienne wrote:
On Mon, Jan 15, 2024 at 5:17 AM veem v wrote:


Is any key design/architectural changes should the app development
team [...], should really aware about


Hi. One of the biggest pitfall of PostgreSQL, from the app-dev perspective,
is the fact any failed statement fails the whole transaction, with 
ROLLBACK as the only recourse.


https://www.postgresql.org/docs/current/sql-savepoint.html

"SAVEPOINT establishes a new savepoint within the current transaction.

A savepoint is a special mark inside a transaction that allows all 
commands that are executed after it was established to be rolled back, 
restoring the transaction state to what it was at the time of the 
savepoint."
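A minimal sketch of that remedy (hypothetical table t with a primary key):

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT try_it;
INSERT INTO t VALUES (1);       -- fails: duplicate key
ROLLBACK TO SAVEPOINT try_it;   -- only the failed part is undone
INSERT INTO t VALUES (2);       -- transaction continues normally
COMMIT;                         -- rows 1 and 2 are both kept
```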


--
Adrian Klaver
adrian.kla...@aklaver.com





Postgres Database Service Interruption

2024-01-16 Thread Bablu Kumar Nayak
Dear PostgreSQL Team,



I am writing to inform you that our PostgreSQL database service is
currently down. We are experiencing an unexpected interruption, and we are
seeking your expertise to help us resolve this issue promptly.

We would greatly appreciate your immediate attention to this matter. If
there are specific steps we should follow or additional information you
require, please let us know as soon as possible.

Your assistance in resolving this issue is crucial, and we are confident in
your expertise to help us bring the PostgreSQL database back online.



Here are some details about the current situation:

*1) checking the status:-*

/apps/postgresdb/pgsql/bin/pg_ctl status -D /apps/postgresdb/pgsql/data
pg_ctl: no server running


*2) Starting the server*

/apps/postgresdb/pgsql/bin/pg_ctl start -D /apps/postgresdb/pgsql/data
waiting for server to start2024-01-15 11:15:08.010 GMT [] LOG:
 listening on IPv4 address "0.0.0.0", port
LOG:  listening on IPv6 address "::", port
LOG:  listening on Unix socket "/tmp/.s.PGSQL."
LOG:  database system was interrupted while in recovery at 2024-01-15
10:51:44 GMT
HINT:  This probably means that some data is corrupted and you will have to
use the last backup for recovery.
FATAL:  the database system is starting up
LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  redo starts at 0/
FATAL:  could not access status of transaction
DETAIL:  Could not read from file "pg_xact/0001" at offset 204800: Success.
CONTEXT:  WAL redo at 0/7A845458 for Transaction/COMMIT: 2023-12-30
23:26:16.017062+00
LOG:  startup process (PID 2731458) exited with exit code 1
LOG:  aborting startup due to startup process failure
LOG:  database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.


Thanks
Bablu Nayak


Re: Moving to Postgresql database

2024-01-16 Thread Dominique Devienne
On Tue, Jan 16, 2024 at 5:07 PM Adrian Klaver 
wrote:

> On 1/16/24 00:06, Dominique Devienne wrote:
> > On Mon, Jan 15, 2024 at 5:17 AM veem v  > > wrote:
> > Is any key design/architectural changes should the app development
> > team [...], should really aware about
> > Hi. One of the biggest pitfall of PostgreSQL, from the app-dev
> perspective,
> > is the fact any failed statement fails the whole transaction, with
> > ROLLBACK as the only recourse.
>
> "SAVEPOINT establishes a new savepoint within the current transaction.
>

I wish it was that easy.
I've been scared away from using them, after reading a few articles...
Also, that incurs extra round trips to the server, from the extra commands.

I really wish https://github.com/lzlabs/pg_statement_rollback was built-in.
Don't make it the default, for backward compatibility, but please let me
opt-in to it, w/o an extension.

https://buttondown.email/nelhage/archive/notes-on-some-postgresql-implementation-details/
https://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful
https://about.gitlab.com/blog/2021/09/29/why-we-spent-the-last-month-eliminating-postgresql-subtransactions/


Re: Postgres Database Service Interruption

2024-01-16 Thread Adrian Klaver

On 1/16/24 08:29, Bablu Kumar Nayak wrote:

Dear PostgreSQL Team,

I am writing to inform you that our PostgreSQL database service is 
currently down. We are experiencing an unexpected interruption, and we 
are seeking your expertise to help us resolve this issue promptly.


We would greatly appreciate your immediate attention to this matter. If 
there are specific steps we should follow or additional information you 
require, please let us know as soon as possible.


Your assistance in resolving this issue is crucial, and we are confident 
in your expertise to help us bring the PostgreSQL database back online.


I believe you are looking for this:

https://www.postgresql.org/support/professional_support/





HINT:  This probably means that some data is corrupted and you will have 
to use the last backup for recovery.


The above being the critical issue.
I'm guessing there was a hardware failure.


Thanks
Bablu Nayak





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Postgres Database Service Interruption

2024-01-16 Thread Rob Sargent

On 1/16/24 09:29, Bablu Kumar Nayak wrote:


Dear PostgreSQL Team,

I am writing to inform you that our PostgreSQL database service is 
currently down. We are experiencing an unexpected interruption, and we 
are seeking your expertise to help us resolve this issue promptly.


We would greatly appreciate your immediate attention to this matter. 
If there are specific steps we should follow or additional information 
you require, please let us know as soon as possible.


Your assistance in resolving this issue is crucial, and we are 
confident in your expertise to help us bring the PostgreSQL database 
back online.




Explain your current env. postgres version, how it was install. 
OS/Version etc

What do you see when you "Examine the log output".


Re: Moving to Postgresql database

2024-01-16 Thread Adrian Klaver

On 1/16/24 09:04, Dominique Devienne wrote:
On Tue, Jan 16, 2024 at 5:07 PM Adrian Klaver > wrote:


On 1/16/24 00:06, Dominique Devienne wrote:
 > On Mon, Jan 15, 2024 at 5:17 AM veem v  wrote:
 >     Is any key design/architectural changes should the app
development
 >     team [...], should really aware about
 > Hi. One of the biggest pitfall of PostgreSQL, from the app-dev
perspective,
 > is the fact any failed statement fails the whole transaction, with
 > ROLLBACK as the only recourse.

"SAVEPOINT establishes a new savepoint within the current transaction.


I wish it was that easy.
I've been scared away from using them, after reading a few articles...
Also, that incurs extra round trips to the server, from the extra commands.


The point was that '...  with ROLLBACK as the only recourse.' is not the 
case. There is an alternative, whether you want to use it being a 
separate question.



--
Adrian Klaver
adrian.kla...@aklaver.com





Mimic ALIAS in Postgresql?

2024-01-16 Thread Ron Johnson
Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a
different name (while also referring to it by the original name).

We have an application running on DB2/UDB which (for reasons wholly unknown
to me, and probably also to the current developer) extensively uses this
with two schemas: MTUSER and MTQRY.  For example, it sometimes refers to
MTUSER.sometable and other times refers to it as MYQRY.sometable.

My goal is to present a way to migrate from UDB to PG with as few
application changes as possible.  Thus, the need to mimic aliases.

Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
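A sketch of that approach (assuming mtuser.sometable exists with an id column, and that no table named sometable already exists in mtqry): since PostgreSQL 9.3, simple single-table views are automatically updatable, so DML through the view reaches the base table without triggers or rules:

```sql
CREATE SCHEMA IF NOT EXISTS mtqry;
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;

-- DML through the "alias" is routed to mtuser.sometable:
INSERT INTO mtqry.sometable (id) VALUES (42);
UPDATE mtqry.sometable SET id = 43 WHERE id = 42;
DELETE FROM mtqry.sometable WHERE id = 43;
```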


Re: Moving to Postgresql database

2024-01-16 Thread Dominique Devienne
On Tue, Jan 16, 2024 at 6:10 PM Adrian Klaver 
wrote:

> On 1/16/24 09:04, Dominique Devienne wrote:
> > On Tue, Jan 16, 2024 at 5:07 PM Adrian Klaver  > > wrote:
> >
> > On 1/16/24 00:06, Dominique Devienne wrote:
> >  > On Mon, Jan 15, 2024 at 5:17 AM veem v  > 
> >  > >> wrote:
> >  > Is any key design/architectural changes should the app
> > development
> >  > team [...], should really aware about
> >  > Hi. One of the biggest pitfall of PostgreSQL, from the app-dev
> > perspective,
> >  > is the fact any failed statement fails the whole transaction, with
> >  > ROLLBACK as the only recourse.
> >
> > "SAVEPOINT establishes a new savepoint within the current
> transaction.
> >
> >
> > I wish it was that easy.
> > I've been scared away from using them, after reading a few articles...
> > Also, that incurs extra round trips to the server, from the extra
> commands.
>
> The point was that '...  with ROLLBACK as the only recourse.' is not the
> case. There is an alternative, whether you want to use it being a
> separate question.
>

Technically, it's still a ROLLBACK, so that is indeed the only recourse.
But sure, I take your point, you can emulate statement-level (implicit)
rollback
via an explicit SAVEPOINT, and ROLLBACK to the savepoint instead.

But my point remains, that something like what that extension does should
be an option of PostgreSQL itself, not an extension. --DD

PS: I'd also be happy to hear why it's not, or won't be, on technical terms.


Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Adrian Klaver

On 1/16/24 09:20, Ron Johnson wrote:
Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a 
different name (while also referring to it by the original name).


We have an application running on DB2/UDB which (for reasons wholly 
unknown to me, and probably also to the current developer) extensively 
uses this with two schemas: MTUSER and MTQRY.  For example, sometimes 
refer to MTUSER.sometable and other times refer to it as MYQRY.sometable.


Just to be clear, the table name is the same in each schema, correct?

In other words, setting search_path would only help if it was set per
session, depending on which schema you wanted to access.




My goal is to present a way to migrate from UDB to PG with as few 
application changes as possible.  Thus, the need to mimic aliases.


Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Adrian Klaver

On 1/16/24 09:20, Ron Johnson wrote:
Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a 
different name (while also referring to it by the original name).






Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;



Assuming sometable is the same name in both schemas, then the above will
not work, as:


https://www.postgresql.org/docs/current/sql-createview.html

"The name of the view must be distinct from the name of any other
relation (table, sequence, index, view, materialized view, or foreign
table) in the same schema."

You would get a conflict with the existing table MTQRY.sometable.

Though I noticed you have both MTQRY and MYQRY referring to the same 
thing, I think.


--
Adrian Klaver
adrian.kla...@aklaver.com





Tips on troubleshooting slow DELETE (suspect cascades)

2024-01-16 Thread Jim Vanns
Hi,

I have a slow (CPU-bound) DELETE statement I'm attempting to debug, and I
suspect that it's actually the ON DELETE CASCADE on the foreign key that's
causing it. I suspect this because the dry-run mode of the same query (a
SELECT instead of a DELETE) doesn't suffer the same fate. The statement is
effectively:

# Dry mode
SELECT prune_function(timestamp);
# Destructive mode
DELETE FROM foobar p USING prune_function(timestamp) AS e WHERE p.id =
e.prune_id

The logs seem to hold no information on the progress of the statement but
the CPU is pegged at 100% for hours. The SELECT equivalent runs in under a
minute.

What I need is a way to see into this statement as it executes to confirm
my suspicion - does anyone have any tips on that?

Cheers

Jim

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: Tips on troubleshooting slow DELETE (suspect cascades)

2024-01-16 Thread Adrian Klaver

On 1/16/24 09:45, Jim Vanns wrote:

Hi,

I have a slow (CPU bound) DELETE statement I'm attempting to debug and I 
suspect that its actually the ON DELETE CASCADE on the foreign key thats 
causing it. I suspect this because the dry-run mode of the same query (a 
SELECT instead of DELETE) doesn't suffer the same fate. The statement is 
effectively;


# Dry mode
SELECT prune_function(timestamp);
# Destructive mode
DELETE FROM foobar p USING prune_function(timestamp) AS e WHERE p.id 
 = e.prune_id


The logs seem to hold no information on the progress of the statement 
but the CPU is pegged at 100% for hours. The SELECT equivalent runs in 
under a minute.


What I need is a way to see into this statement as it executes to 
confirm my suspicion - does anyone have any tips on that?


Explain:

https://www.postgresql.org/docs/current/sql-explain.html

It would also be helpful to reply with the table definitions for the 
tables. If that is not possible, then at least say whether there is an 
index on the FK reference in the child table(s).
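One concrete way to see where the time goes is EXPLAIN ANALYZE, which reports per-trigger timings, including FK cascade triggers (a sketch using the original statement with a sample timestamp; note that EXPLAIN ANALYZE of a DELETE really executes it, hence the surrounding transaction):

```sql
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM foobar p
USING prune_function('2024-01-01'::timestamp) AS e
WHERE p.id = e.prune_id;
ROLLBACK;  -- undo the delete; we only wanted the timings
-- The plan output ends with lines such as:
--   Trigger for constraint <child_fk_name>: time=... calls=...
-- A dominant FK trigger usually means the referencing column in the
-- child table lacks an index, so each cascaded delete scans that table.
```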




Cheers

Jim

--
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: postgres sql assistance

2024-01-16 Thread Ron Johnson
"*invalid input syntax for type boolean: "15"*"

That is the problem.  You can't insert 15 into a column of type "boolean".
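For reference, PostgreSQL's accepted boolean input literals are quite specific, which is why 15 (and 12) are rejected outright (a quick psql sketch):

```sql
-- All of these are valid true spellings; 'false', 'f', 'no', 'off',
-- and '0' are the corresponding false spellings:
SELECT 'true'::boolean, 't'::boolean, 'yes'::boolean,
       'on'::boolean, '1'::boolean;

SELECT '15'::boolean;
-- ERROR:  invalid input syntax for type boolean: "15"
```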

On Tue, Jan 16, 2024 at 7:35 AM arun chirappurath 
wrote:

> Dear all,
>
> I am an accidental postgres DBA and learning things every day. Apologies
> for my questions if not properly drafted.
>
> I am trying to load data from the temp table to the main table and catch
> the exceptions inside another table.
>
> temp table is cast with the main table data type and trying to load the
> data.
>
> temp table is below.
>
> category_name  |description
>| is_active
>
> ---+-+---
>  *Tech123212312312323233213123123123123*| Furniture and home decor
>  | true
>  *Tech123212312312323233213123123123123*| Electronic devices and
> accessories  | true
>  Elec| Books of various genres
> | *15*
>  TV  | Books
> | *12*
>  cla | Apparel and fashion accessories
> | true
>
> category name is varchar(25) and is_active is boolean in main table. So i
> should get exceptions for 1st,2nd for category_name rows and 4 and 5th rows
> for boolean. In exception table results,its only showing
>
> Exception table is below. Here instead of showing exception for value 12
> in the is_active table its showing old exception for 15 itself.. Script is
> attached,,...SQLERRM value is not getting updated for row 12..WHat could be
> the reason for this?
>
> value too long for type character varying(25) | category_name | 1 | 2024-01-16 16:17:01.279 +0530
> value too long for type character varying(25) | description   | 2 | 2024-01-16 16:17:01.279 +0530
> invalid input syntax for type boolean: "15"   | is_active     | 3 | 2024-01-16 16:17:01.279 +0530
> invalid input syntax for type boolean: "15"   |               | 4 | 2024-01-16 16:17:01.279 +0530
> invalid input syntax for type boolean: "15"   |               | 5 | 2024-01-16 16:17:01.279 +0530
>
>


Re: Moving to Postgresql database

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 12:10 PM Adrian Klaver 
wrote:

> On 1/16/24 09:04, Dominique Devienne wrote:
> > On Tue, Jan 16, 2024 at 5:07 PM Adrian Klaver wrote:
> >
> > On 1/16/24 00:06, Dominique Devienne wrote:
> >  > On Mon, Jan 15, 2024 at 5:17 AM veem v wrote:
> >  > Is any key design/architectural changes should the app
> > development
> >  > team [...], should really aware about
> >  > Hi. One of the biggest pitfall of PostgreSQL, from the app-dev
> > perspective,
> >  > is the fact any failed statement fails the whole transaction, with
> >  > ROLLBACK as the only recourse.
> >
> > "SAVEPOINT establishes a new savepoint within the current
> transaction.
> >
> >
> > I wish it was that easy.
> > I've been scared away from using them, after reading a few articles...
> > Also, that incurs extra round trips to the server, from the extra
> commands.
>
> The point was that '...  with ROLLBACK as the only recourse.' is not the
> case. There is an alternative, whether you want to use it being a
> separate question.
>

Performance-killing alternatives are not really alternatives.


Re: Moving to Postgresql database

2024-01-16 Thread Adrian Klaver

On 1/16/24 09:23, Dominique Devienne wrote:
On Tue, Jan 16, 2024 at 6:10 PM Adrian Klaver wrote:



Technically, it's still a ROLLBACK, so that is indeed the only recourse.


Actually ROLLBACK TO:

https://www.postgresql.org/docs/current/sql-rollback-to.html

You would get a different outcome with just a plain ROLLBACK.

But sure, I take your point, you can emulate statement-level (implicit)
rollback via an explicit SAVEPOINT, and ROLLBACK to the savepoint instead.
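As a concrete illustration of that emulation (a minimal sketch; the table, values, and savepoint name are made up):

```sql
BEGIN;

INSERT INTO t (id) VALUES (1);

SAVEPOINT before_try;
-- Try something that may fail; on error, only work since the savepoint
-- is undone, instead of the whole transaction being aborted.
INSERT INTO t (id) VALUES (1);        -- suppose this violates the PK
ROLLBACK TO SAVEPOINT before_try;     -- transaction is usable again

INSERT INTO t (id) VALUES (2);        -- the if-it-fails-do-something-else path
COMMIT;                               -- keeps rows 1 and 2
```

This is the try-something-and-if-it-fails-do-something-else pattern from the start of the thread, at the cost of the extra round trips already mentioned.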

But my point remains, that something like what that extension does should
be an option of PostgreSQL itself, not an extension. --DD

PS: I'd also be happy to hear why it's not, or won't be, on technical terms.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Moving to Postgresql database

2024-01-16 Thread Adrian Klaver

On 1/16/24 09:59, Ron Johnson wrote:


Performance-killing alternatives are not really alternatives.


Unless it is the only one that solves your problem.

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 12:40 PM Adrian Klaver 
wrote:

> On 1/16/24 09:20, Ron Johnson wrote:
> > Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a
> > different name (while also referring to it by the original name).
> >
>
> >
> > Maybe updatable views?
> > CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
> >
>
> Assuming sometable is the same name in both schemas then the above will
> not work as:
>
> https://www.postgresql.org/docs/current/sql-createview.html
>
> "The name of the view must be distinct from the name of any other
> relation (table, sequence, index, view, materialized view, or foreign
> table) in the same schema."
>
> You would get a conflict with the existing table MTQRY.sometable.
>

> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;

But mtqry is not the same schema as mtuser.

dba=# create schema mtuser;
CREATE SCHEMA
dba=# create schema mtqry;
CREATE SCHEMA
dba=#
dba=# create table mtuser.sometable(f1 int);
CREATE TABLE
dba=#
dba=# create view mtqry.sometable as select * from mtuser.sometable;
CREATE VIEW

But what are the down-sides that I haven't thought of?


Re: Moving to Postgresql database

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 1:09 PM Adrian Klaver 
wrote:

> On 1/16/24 09:59, Ron Johnson wrote:
>
> > Performance-killing alternatives are not really alternatives.
>
> Unless it is the only one that solves your problem.
>

Amputating one's head cures one's migraines, but nobody thinks it's a
viable solution to the problem of migraines.


> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Moving to Postgresql database

2024-01-16 Thread Jim Nasby

On 1/16/24 11:59 AM, Ron Johnson wrote:

 >      > Hi. One of the biggest pitfall of PostgreSQL, from the app-dev
 >     perspective,
 >      > is the fact any failed statement fails the whole
transaction, with
 >      > ROLLBACK as the only recourse.
 >
 >     "SAVEPOINT establishes a new savepoint within the current
transaction.
 >
 >
 > I wish it was that easy.
 > I've been scared away from using them, after reading a few
articles...
 > Also, that incurs extra round trips to the server, from the extra
commands.

The point was that '...  with ROLLBACK as the only recourse.' is not
the
case. There is an alternative, whether you want to use it being a
separate question.


Performance-killing alternatives are not really alternatives.


What's the actual performance issue here?

I'm also wondering what the use case for constantly retrying errors is.
--
Jim Nasby, Data Architect, Austin TX





Re: What should I expect when creating many logical replication slots?

2024-01-16 Thread Jim Nasby

On 1/11/24 6:17 PM, Antonin Bas wrote:

Hi all,

I have a use case for which I am considering using Postgres Logical 
Replication, but I would like to scale up to 100 or even 200 
replication slots.


I have increased max_wal_senders and max_replication_slots to 100 (also 
making sure that max_connections is large enough). Things seem to be 
working pretty well so far based on some PoC code I have written. 
Postgres is creating a walsender process for each replication slot, as 
expected, and the memory footprint of each one is around 4MB.


So I am quite happy with the way things are working, but I am a bit 
uneasy about increasing these configuration values by 10-20x compared to 
their defaults (both max_wal_senders and max_replication_slots default 
to 10).


Is there anything I should be looking out for specifically? Is it 
considered an anti-pattern to use that many replication slots and 
walsender processes? And, when my database comes under heavy write load, 
will walsender processes start consuming a large amount of CPU / memory 
(I recognize that this is a vague question, I am still working on some 
empirical testing).


The biggest issue with logical decoding (what drives logical 
replication) is that every subscriber has to completely decode 
everything for its publication, which can be extremely memory intensive 
under certain circumstances (long-running transactions being one 
potential trigger). Decoders also have to read through all WAL traffic, 
regardless of what their publication is set to - everything runs off the 
single WAL stream.


Note that this only applies to actually decoding - simply having a large 
number of slots isn't much of an issue. Even having a large number of 
subscribers that aren't consuming isn't a resource issue (though it IS 
an issue for MVCC / vacuuming!) - to test you need to have all the 
decoders that you expect to support.


Ultimately, I'd be concerned with trying to support 100+ slots unless 
you know that your change rate isn't super high and that you don't have 
long-running transactions.
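For the empirical testing mentioned above, the pg_replication_slots view is the main thing to watch - in particular how far each slot's restart_lsn lags behind the current WAL position, since that is what pins WAL (and the xmin horizon) on the primary. A hedged sketch:

```sql
-- How much WAL each slot is holding back, and whether a consumer is attached.
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;
```

A slot that shows active = false with steadily growing retained_wal is exactly the "subscriber that isn't consuming" case warned about above.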

--
Jim Nasby, Data Architect, Austin TX





pg_dump Running Slow

2024-01-16 Thread Yongye Serkfem
Hello Engineers,
I trust you are all doing well. I need help on how to improve the speed of
pg_dump. I took a pg_dump on a 1TB database, which took almost a whole day.
I used this command: "pg_dump -U postgres -d dynamic -f /backups/." Also,
how do I check on the progression of the process?

Sincerely yours
Yongye Serkfem


Re: How to redirect output from PostgreSQL pg_recvlogical to a file or a pipe?

2024-01-16 Thread Jim Nasby

On 1/13/24 3:34 PM, David Ventimiglia wrote:

The business problem I'm trying to solve is:

"How do I capture logical decoding events with the wal2json output 
encoder, filter them with jq, and pipe them to psql, using pg_recvlogical?"


I think the missing piece here is that you can't simply pipe JSON into 
psql and expect anything useful to happen. Are you using jq to turn the 
JSON into actual SQL statements? What does some of your jq output look like?

--
Jim Nasby, Data Architect, Austin TX





Re: How to redirect output from PostgreSQL pg_recvlogical to a file or a pipe?

2024-01-16 Thread David Ventimiglia
Thanks for the reply, Jim.  No, I'm afraid that's not the missing piece.  I
knew enough to use jq to transform the JSON output into SQL statements.
What I didn't know enough about was jq itself.  No, the missing piece turned
out not to have anything to do with PostgreSQL or pg_recvlogical (I guessed
incorrectly that it might), but rather with jq.  I didn't realize
that jq buffers its input, and it turns out all I had to do was use its
--unbuffered switch.  The full chapter-and-verse is described in this Stack
Overflow question and answer.

Cheers,
David

On Tue, Jan 16, 2024 at 12:57 PM Jim Nasby  wrote:

> On 1/13/24 3:34 PM, David Ventimiglia wrote:
> > The business problem I'm trying to solve is:
> >
> > "How do I capture logical decoding events with the wal2json output
> > encoder, filter them with jq, and pipe them to psql, using
> pg_recvlogical?"
>
> I think the missing piece here is that you can't simply pipe JSON into
> psql and expect anything useful to happen. Are you using jq to turn the
> JSON into actual SQL statements? What does some of your jq output look
> like?
> --
> Jim Nasby, Data Architect, Austin TX
>
>


Re: How to redirect output from PostgreSQL pg_recvlogical to a file or a pipe?

2024-01-16 Thread David Ventimiglia
Whoops!  Wrong SO link.  Here's the correct SO link:

https://stackoverflow.com/questions/77808615/how-to-use-logical-decoding-with-pg-recvlogical-to-pass-changes-through-a-non-tr

On Tue, Jan 16, 2024 at 1:15 PM David Ventimiglia <
davidaventimig...@hasura.io> wrote:

> Thanks for the reply, Jim.  No, I'm afraid that's not the missing piece.
> I knew enough to use jq to transform the JSON output into SQL statements.
> What I didn't know enough was about jq.  No, the missing piece turned out
> not to have anything to do with PostgreSQL or pg_recvlogical (I guessed
> incorrectly that it might), but rather with jq itself.  I didn't realize
> that jq buffers its input and it turns out all I had to do was use its
> --unbuffered switch.  The full chapter-and-verse is described in this
> Stack Overflow question and answer.
>
> Cheers,
> David
>
> On Tue, Jan 16, 2024 at 12:57 PM Jim Nasby  wrote:
>
>> On 1/13/24 3:34 PM, David Ventimiglia wrote:
>> > The business problem I'm trying to solve is:
>> >
>> > "How do I capture logical decoding events with the wal2json output
>> > encoder, filter them with jq, and pipe them to psql, using
>> pg_recvlogical?"
>>
>> I think the missing piece here is that you can't simply pipe JSON into
>> psql and expect anything useful to happen. Are you using jq to turn the
>> JSON into actual SQL statements? What does some of your jq output look
>> like?
>> --
>> Jim Nasby, Data Architect, Austin TX
>>
>>


Re: pg_dump Running Slow

2024-01-16 Thread Adrian Klaver


On 1/16/24 10:55 AM, Yongye Serkfem wrote:

Hello Engineers,
I trust you are all doing well. I need help on how to improve the 
speed of pg_dump. I took a pg_dump on a 1TB database, which took 
almost a whole day. I used this command: "pg_dump -U postgres -d 
dynamic -f /backups/." Also, how do I check on the progression of the 
process?



https://www.postgresql.org/docs/current/app-pgdump.html

"
||
|j /|njobs|/|
|--jobs=/|njobs|/|

   Run the dump in parallel by dumping /|njobs|/ tables simultaneously.
   This option may reduce the time needed to perform the dump but it
   also increases the load on the database server. You can only use
   this option with the directory output format because this is the
   only output format where multiple processes can write their data at
   the same time.

   pg_dump will open /|njobs|/ + 1 connections to the database, so make
   sure your max_connections
   

   setting is high enough to accommodate all connections.

<...>

"


"

|-v|
|--verbose|

   Specifies verbose mode. This will cause pg_dump to output detailed
   object comments and start/stop times to the dump file, and progress
   messages to standard error. Repeating the option causes additional
   debug-level messages to appear on standard error.||

   |"|



Sincerely your
Yongye Serkfem

Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Adrian Klaver


On 1/16/24 10:11 AM, Ron Johnson wrote:
On Tue, Jan 16, 2024 at 12:40 PM Adrian Klaver 
 wrote:


On 1/16/24 09:20, Ron Johnson wrote:
> Some RDBMSs have CREATE ALIAS, which allows you to refer to a
table by a
> different name (while also referring to it by the original name).
>

>
> Maybe updatable views?
> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>

Assuming sometable is the same name in both schemas then the above
will
not work as:

https://www.postgresql.org/docs/current/sql-createview.html

"The name of the view must be distinct from the name of any other
relation (table, sequence, index, view, materialized view, or foreign
table) in the same schema."

You would get a conflict with the existing table MTQRY.sometable.


> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
But mtqry is not the same schema as mtuser..

dba=# create schema mtuser;
CREATE SCHEMA
dba=# create schema mtqry;
CREATE SCHEMA
dba=#
dba=# create table mtuser.sometable(f1 int);
CREATE TABLE
dba=#
dba=# create view mtqry.sometable as select * from mtuser.sometable;
CREATE VIEW

But what are the down-sides that I haven't thought of?



What happened to the MYQRY schema in your OP?

In the above you still have a relation with the same name in a different 
schema.


How does that change the issue?




Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 2:24 PM Adrian Klaver 
wrote:

>
> On 1/16/24 10:11 AM, Ron Johnson wrote:
>
> On Tue, Jan 16, 2024 at 12:40 PM Adrian Klaver 
> wrote:
>
>> On 1/16/24 09:20, Ron Johnson wrote:
>> > Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by
>> a
>> > different name (while also referring to it by the original name).
>> >
>>
>> >
>> > Maybe updatable views?
>> > CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>> >
>>
>> Assuming sometable is the same name in both schemas then the above will
>> not work as:
>>
>> https://www.postgresql.org/docs/current/sql-createview.html
>>
>> "The name of the view must be distinct from the name of any other
>> relation (table, sequence, index, view, materialized view, or foreign
>> table) in the same schema."
>>
>> You would get a conflict with the existing table MTQRY.sometable.
>>
>
> > CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>
> But mtqry is not the same schema as mtuser..
>
> dba=# create schema mtuser;
> CREATE SCHEMA
> dba=# create schema mtqry;
> CREATE SCHEMA
> dba=#
> dba=# create table mtuser.sometable(f1 int);
> CREATE TABLE
> dba=#
> dba=# create view mtqry.sometable as select * from mtuser.sometable;
> CREATE VIEW
>
> But what are the down-sides that I haven't thought of?
>
>
> What happened to the MYQRY schema in your OP?
>
?


> In the above you still have a relation with the same name in different
> schema.
>

Yes.  That's the whole point.


> How does that change the issue?
>
I'm asking how to mimic table aliases, where a table is in MTUSER, but --
via the mechanism of aliases -- can be referenced from schema MTQRY.


Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Adrian Klaver



On 1/16/24 11:40 AM, Ron Johnson wrote:
On Tue, Jan 16, 2024 at 2:24 PM Adrian Klaver 
 wrote:



On 1/16/24 10:11 AM, Ron Johnson wrote:

On Tue, Jan 16, 2024 at 12:40 PM Adrian Klaver
 wrote:

On 1/16/24 09:20, Ron Johnson wrote:
> Some RDBMSs have CREATE ALIAS, which allows you to refer to
a table by a
> different name (while also referring to it by the original
name).
>

>
> Maybe updatable views?
> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>

Assuming sometable is the same name in both schemas then the
above will
not work as:

https://www.postgresql.org/docs/current/sql-createview.html

"The name of the view must be distinct from the name of any other
relation (table, sequence, index, view, materialized view, or
foreign
table) in the same schema."

You would get a conflict with the existing table MTQRY.sometable.


> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
But mtqry is not the same schema as mtuser..

dba=# create schema mtuser;
CREATE SCHEMA
dba=# create schema mtqry;
CREATE SCHEMA
dba=#
dba=# create table mtuser.sometable(f1 int);
CREATE TABLE
dba=#
dba=# create view mtqry.sometable as select * from mtuser.sometable;
CREATE VIEW

But what are the down-sides that I haven't thought of?



What happened to the MYQRY schema in your OP?

?



"... with two schemas: MTUSER and MTQRY. For example, sometimes refer to
MTUSER.sometable and other times refer to it as MYQRY.sometable.

"


Now if MYQRY.sometable in your OP is in fact MTQRY.sometable then this:

create view mtqry.sometable as select * from mtuser.sometable;


is not going to work as a view cannot have the same name as a table in 
the same schema.





Re: Why scan all columns when we select distinct c1?

2024-01-16 Thread Jim Nasby

On 1/14/24 9:46 AM, Tom Lane wrote:

Ron Johnson  writes:

You can't scan just one column of a row-oriented table.


Technically you kinda can, depending on your definition of "scan".


The real question is why it mentions c2.

The planner did that so that the SeqScan step doesn't have to
perform a projection: it can just return (a pointer to)
the physical tuple it found in the table, without doing extra
work to form a tuple containing only c1.


The piece of info that's not mentioned here is how tuples (rows) are 
actually processed to extract individual datums (columns). The full 
details are in heap_deform_tuple() in backend/access/common/heaptuple.c, 
but the general gist is that (ignoring nulls) to read a tuple the code 
has to go datum by datum, computing the size of each datum to determine 
the physical location of the *next* datum. So if you want the 3rd datum 
in a tuple, you need to calculate the size of the 1st datum to see where 
the 2nd datum lives, and then compute the size of the 2nd datum to see 
where the 3rd one lives.


In this example, if c1 is literally the first column in the table, then 
heap_deform_tuple is free to ignore everything else in the tuple, so 
long as the code calling heap_deform_tuple() knows to ask for only 1 datum.


If all that sounds kinda expensive and tedious: you're right, it is, and 
it's why deforming tuples is generally done as late as possible. Based 
on what Tom's saying, in 9.6+ the HashAggregate code would be calling 
heap_deform_tuple(), and I'd expect it to only be retrieving c1. Without 
going through all the code, I think what's happening in 9.4 is the 
projection ends up calling heap_deform_tuple instead of the HashAgg 
code. It's still only grabbing c1; it's just doing it sooner rather than 
later. In this particular case I don't think it'd make much difference, 
but in more complicated queries it could certainly have a noticeable effect.


Either way, the explain output is kinda confusing. IMO it'd be more 
accurate if it said something like "Output: t1". And Ron's main point 
that you're going to be reading an entire row of t1 from the OS is also 
true.


BTW, there's another place where the code waits as long as possible to 
access actual data in the hopes of avoiding needless work, and that's 
values that have been TOASTed. heap_deform_tuple() doesn't actually need 
to de-toast data, so it will simply return a Datum that is a "pointer" 
(not a C pointer) to the toasted data. That will only be detoasted if 
something actually needs the actual data. In some cases that can be a big 
performance win.

--
Jim Nasby, Data Architect, Austin TX





Re: Add support for data change delta tables

2024-01-16 Thread Jim Nasby

On 1/15/24 4:40 AM, Pavel Stehule wrote:
PostgreSQL is not a project where somebody says, you will do this, and 
you will do this, in this and this priority. There are a lot of nice 
features from standard - SQL standard is big, but a) you have to have 
funds to pay the developers, b) you have to have developers with free 
time who want to do free work. Sure, you can write a proposal, but 
without a) money or b) your hard work is not a big possibility so your 
proposal will be implemented in the next few years. With the patch the 
situation is different. Then developers can see, so it is possible to 
implement, and how much code (and work) needs it.


And in this scenario, the big thing that a patch provides is clarity on 
what you're actually trying to accomplish. Your original ask is frankly 
as clear as mud to me - I have no clue what the syntax you showed is 
supposed to be doing.


If you were to ask about something concrete in plain English (vs some 
unspecified SQL syntax) then you'd be much more likely to at least get 
some kind of response about your idea. It's certainly possible to 
convince people on -hackers to work on something that you'd like to see 
done, but the base prerequisite for that is you have to communicate what 
you're actually looking for.

--
Jim Nasby, Data Architect, Austin TX





Re: data migration using EXTENSION tds_fdw

2024-01-16 Thread Jim Nasby

On 1/15/24 1:17 PM, Adrian Klaver wrote:


I don't use tds_fdw, so the best I can do is point you at information

Hopefully someone with practical experience will chime in with more 
specific information.


The information I could find is:

https://www.freetds.org/userguide/configs.html#Localization


One other UTF8 "gotcha" in Postgres that may or may not be relevant 
here: UTF8 considers the value 0 (as in a 0 byte; not the glyph "0") to 
be a legit character, but for various reasons Postgres can't actually 
support that, so trying to insert one into a text/varchar/char field in PG 
will give you an error (not sure what exact error you'd get though).

--
Jim Nasby, Data Architect, Austin TX





Re: postgres sql assistance

2024-01-16 Thread Jim Nasby

On 1/16/24 6:34 AM, arun chirappurath wrote:
I am trying to load data from the temp table to the main table and catch 
the exceptions inside another table.


I don't have a specific answer, but do have a few comments:

- There are much easier ways to do this kind of data load. Search for 
"postgres data loader" on google.


- When you're building your dynamic SQL you almost certainly should have 
some kind of ORDER BY on the queries pulling data from 
information_schema. SQL never mandates data ordering except when you 
specifically use ORDER BY, so the fact that your fields are lining up 
right now is pure luck.


- EXCEPTION WHEN others is kinda dangerous, because it traps *all* 
errors. It's much safer to find the exact error code. An easy way to do 
that in psql is \errverbose [1]. In this particular case that might not 
work well since there's a bunch of different errors you could get that 
are directly related to a bad row of data. BUT, there's also a bunch of 
errors you could get that have nothing whatsoever to do with the data 
you're trying to load (like if there's a bug in your code that's 
building the INSERT statement).


- You should look at the other details you can get via GET STACKED 
DIAGNOSTICS [2]. As far as I can tell, your script as-written will 
always return the first column in the target table. Instead you should 
use COLUMN_NAME. Note that not every error will set that though.


1: 
https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMAND-ERRVERBOSE
2: 
https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-EXCEPTION-DIAGNOSTICS
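To illustrate the GET STACKED DIAGNOSTICS suggestion above (a hedged sketch; the table name is invented, and COLUMN_NAME is only populated for some error types):

```sql
DO $$
DECLARE
    v_state text;
    v_msg   text;
    v_col   text;
BEGIN
    INSERT INTO categories (is_active) VALUES ('15');   -- bad boolean input
EXCEPTION
    WHEN invalid_text_representation THEN   -- specific code, not WHEN others
        GET STACKED DIAGNOSTICS
            v_state = RETURNED_SQLSTATE,
            v_msg   = MESSAGE_TEXT,
            v_col   = COLUMN_NAME;          -- may be empty for cast errors
        RAISE NOTICE 'sqlstate=%, column=%, message=%', v_state, v_col, v_msg;
END $$;
```

Trapping invalid_text_representation (SQLSTATE 22P02) rather than WHEN others means an unrelated bug in the dynamic SQL still surfaces as an error instead of landing in the exception table.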

--
Jim Nasby, Data Architect, Austin TX





Re: pg_dump Running Slow

2024-01-16 Thread Jim Nasby

On 1/16/24 12:55 PM, Yongye Serkfem wrote:

Hello Engineers,
I trust you are all doing well. I need help on how to improve the speed 
of pg_dump. I took a pg_dump on a 1TB database, which took almost a 
whole day. I used this command: "pg_dump -U postgres -d dynamic -f 
/backups/." Also, how do I check on the progression of the process?


The advantage to pg_dump is that it's very simple and you'd have to work 
really hard to end up with a backup that won't restore. As such, I 
*always* recommend that it be *part* of your disaster recovery plan. It 
also directly supports only restoring one table.


The flip-side is that all you get from pg_dump is (in one form or 
another) just a bunch of SQL that you'll then have to execute to 
restore. That process can be quite slow for anything but a very small 
cluster. (And as you've seen, pg_dump itself can be pretty slow.) While 
--jobs allows parallelizing that work, it's still a very expensive process.


For clusters of any significant size you'll also want to look at some 
form of binary backup (aka: PITR) [1]. Since that operates directly at 
the OS level it's much faster than pg_dump on large clusters. The 
downside is there are lots of subtle ways to set it up incorrectly, 
resulting in backups that won't restore. Instead of trying to manage it 
by hand, I strongly recommend using a tool that's meant for managing 
Postgres binary backups. pgBackRest[2] and barman[3] are two popular 
choices for that.


Most importantly: ALWAYS TEST YOUR BACKUPS. As I mentioned, it's pretty 
hard to screw up pg_dump, but you don't want to be trying to figure out 
how to restore in the middle of a disaster. Given all the subtleties 
involved with PITR I would actually recommend you test restoring *every* 
base backup you take, especially if you're not using one of the tools to 
manage it.


BTW, one of the biggest advantages to those tools is that a lot of 
people use them, so any bugs (in the tool or in Postgres itself) are 
more likely to be found and eliminated. It's also a big advantage for 
cloud providers (as well as tools like PGO[4]): with many thousands of 
instances running regular backups it's harder for bugs to remain hidden.


1: https://www.postgresql.org/docs/current/continuous-archiving.html
2: https://pgbackrest.org/
3: https://pgbarman.org/
4: https://github.com/CrunchyData/postgres-operator
--
Jim Nasby, Data Architect, Austin TX





Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Jim Nasby

On 1/16/24 11:20 AM, Ron Johnson wrote:
Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a 
different name (while also referring to it by the original name).


We have an application running on DB2/UDB which (for reasons wholly 
unknown to me, and probably also to the current developer) extensively 
uses this with two schemas: MTUSER and MTQRY.  For example, sometimes 
refer to MTUSER.sometable and other times refer to it as MYQRY.sometable.


My goal is to present a way to migrate from UDB to PG with as few 
application changes as possible.  Thus, the need to mimic aliases.


Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;


Based on the schema names one possibility is that the aliases are there 
as a pseudo-api between people/tools writing queries and the base 
tables. IE: if you needed to make a (maybe backwards-incompatible) 
change to "sometable" you now at least have the option of creating a 
MTQRY.sometable *view* that hides whatever change you're making to 
MTUSER.sometable.


In any case, yes, an updatable view would provide equivalent behavior in 
Postgres.
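Since the view is a plain single-table SELECT *, PostgreSQL (9.3 and later) treats it as automatically updatable, so DML through the "alias" behaves like DML on the base table. A quick sketch continuing the example from earlier in the thread:

```sql
-- DML through the view lands in the base table, and both names see it.
INSERT INTO mtqry.sometable (f1) VALUES (42);
UPDATE mtqry.sometable SET f1 = 43 WHERE f1 = 42;

SELECT * FROM mtuser.sometable;           -- the updated row is visible here
DELETE FROM mtqry.sometable WHERE f1 = 43;
```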

--
Jim Nasby, Data Architect, Austin TX





replication not replicating

2024-01-16 Thread Brad White
Errors from the primary

2024-01-15 00:00:51.157 CST [2660] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:00:51.157 CST [2660] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:00:56.158 CST [492] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:00:56.158 CST [492] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:00:56.159 CST [492] LOG:  could not receive data from client:
An existing connection was forcibly closed by the remote host.


2024-01-15 00:01:01.151 CST [3404] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:01:01.151 CST [3404] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:01:01.152 CST [3404] LOG:  could not receive data from
client: An existing connection was forcibly closed by the remote host.

[repeat for 55000 lines]


Errors from the hot backup at roughly the same time

2024-01-15 01:13:57.893 CST [2988] LOG:  started streaming WAL from primary
at 2/A200 on timeline 1
2024-01-15 01:13:57.893 CST [2988] FATAL:  could not receive data from WAL
stream: ERROR:  requested WAL segment 0001000200A2 has already
been removed
2024-01-15 01:13:57.893 CST [1792] LOG:  waiting for WAL to become
available at 2/A2002000
2024-01-15 01:14:02.884 CST [2552] LOG:  started streaming WAL from primary
at 2/A200 on timeline 1
2024-01-15 01:14:02.884 CST [2552] FATAL:  could not receive data from WAL
stream: ERROR:  requested WAL segment 0001000200A2 has already
been removed
2024-01-15 01:14:02.884 CST [1792] LOG:  waiting for WAL to become
available at 2/A2002000

[repeat for 49000 lines]


Parameter value in RDS

2024-01-16 Thread Atul Kumar
Hi,

I am new to RDS postgres, I have version 14 running on it with m7g.large

I found that lots of parameters have DBInstanceClassMemory written in them, so
what exactly is the value of this variable?

How should I calculate it?


Regards.


Re: Parameter value in RDS

2024-01-16 Thread David G. Johnston
On Tuesday, January 16, 2024, Atul Kumar  wrote:

> Hi,
>
> I am new to RDS postgres, I have version 14 running on it with m7g.large
>
> I found that lots of parameters has DBInstanceClassMemory written, so what
> exactly is the value of this variable ?
>
> How should I calculate it?
>
>
IIRC it’s the amount of RAM on your instance.  You look it up in a table
usually.  Or check the web console.

David J.


Re: Parameter value in RDS

2024-01-16 Thread Bruce Momjian
On Wed, Jan 17, 2024 at 02:42:16AM +0530, Atul Kumar wrote:
> Hi,
> 
> I am new to RDS postgres, I have version 14 running on it with m7g.large
> 
> I found that lots of parameters has DBInstanceClassMemory written, so what
> exactly is the value of this variable ?
> 
> How should I calculate it?

We can't answer RDS-specific questions here.  I suggest you ask the
vendor.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Only you can decide what is important to you.




Re: Parameter value in RDS

2024-01-16 Thread Bruce Momjian
On Tue, Jan 16, 2024 at 04:19:55PM -0500, Bruce Momjian wrote:
> On Wed, Jan 17, 2024 at 02:42:16AM +0530, Atul Kumar wrote:
> > Hi,
> > 
> > I am new to RDS postgres, I have version 14 running on it with m7g.large
> > 
> > I found that lots of parameters has DBInstanceClassMemory written, so what
> > exactly is the value of this variable ?
> > 
> > How should I calculate it?
> 
> We can't answer RDS-specific questions here.  I suggest you ask the
> vendor.

Okay, it seems someone here _did_ answer the question.  :-)

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Only you can decide what is important to you.




Re: Parameter value in RDS

2024-01-16 Thread Philip Semanchuk



> On Jan 16, 2024, at 4:19 PM, David G. Johnston  
> wrote:
> 
> On Tuesday, January 16, 2024, Atul Kumar  wrote:
> Hi,
> 
> I am new to RDS postgres, I have version 14 running on it with m7g.large
> 
> I found that lots of parameters has DBInstanceClassMemory written, so what 
> exactly is the value of this variable ?
> 
> How should I calculate it?
> 
>  IIRC it’s the amount of RAM on your instance.  You look it up in a table 
> usually.  Or check the web console.


Yes, I’m pretty sure that it’s the amount of RAM *in bytes*. 

Although I like the flexibility of having that expressed in a variable, it 
wasn’t always as easy to use as I wanted it to be. Specifically, I found that 
differences in units made it a little difficult to figure out how AWS was 
configuring things. 

For example, on an old instance we had, shared_buffers was defined as 
{DBInstanceClassMemory/10922}. It took me a while to figure out that that 
translates to “calculate 75% of available memory, and express that value in 8 kB 
blocks”. How? Well, 10922 ≈ 8 * 1024 * 4/3. shared_buffers is expressed in 
8 kB blocks, so converting from units of bytes (DBInstanceClassMemory) to 8 kB 
blocks (shared_buffers) requires dividing by 8 * 1024. And dividing by 4/3 
is the same as multiplying by 3/4, which is 75%. 

This may have been explained in AWS documentation but I couldn’t find it at the 
time and it took some work on my part to figure out the logic behind 10922 and 
some other config magic numbers. Maybe this will save you some time.
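The arithmetic above can be sanity-checked in a few lines of Python (the 8 GiB
instance size here is just an assumed example, not anything AWS publishes):

```python
# Sanity-check the {DBInstanceClassMemory/10922} formula for shared_buffers.
instance_memory_bytes = 8 * 1024**3  # assumed 8 GiB instance

# 10922 ~= 8192 * 4/3, so dividing by it yields ~75% of RAM in 8 kB blocks.
shared_buffers_blocks = instance_memory_bytes // 10922

# Compare against "75% of RAM, expressed in 8 kB blocks".
expected_blocks = instance_memory_bytes * 0.75 / 8192

# The two agree to within ~0.01% (10922 is 8192 * 4/3 rounded down).
print(shared_buffers_blocks, round(expected_blocks))
```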

Cheers
Philip



replication isn't replicating

2024-01-16 Thread Brad White
Errors from the Primary server

2024-01-15 00:01:06.166 CST [1428] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:01:06.166 CST [1428] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:01:11.158 CST [3472] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:01:11.158 CST [3472] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:01:11.158 CST [3472] LOG:  could not receive data from
client: An existing connection was forcibly closed by the remote host.


2024-01-15 00:01:16.166 CST [664] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:01:16.166 CST [664] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:01:21.161 CST [2016] ERROR:  requested WAL segment
0001000200A2 has already been removed
2024-01-15 00:01:21.161 CST [2016] STATEMENT:  START_REPLICATION 2/A200
TIMELINE 1
2024-01-15 00:01:21.161 CST [2016] LOG:  could not receive data from
client: An existing connection was forcibly closed by the remote host.

[repeat for 55 lines]

Errors from the backup server

2024-01-15 01:13:57.893 CST [2988] LOG:  started streaming WAL from primary
at 2/A200 on timeline 1
2024-01-15 01:13:57.893 CST [2988] FATAL:  could not receive data from WAL
stream: ERROR:  requested WAL segment 0001000200A2 has already
been removed
2024-01-15 01:13:57.893 CST [1792] LOG:  waiting for WAL to become
available at 2/A2002000
2024-01-15 01:14:02.884 CST [2552] LOG:  started streaming WAL from primary
at 2/A200 on timeline 1
2024-01-15 01:14:02.884 CST [2552] FATAL:  could not receive data from WAL
stream: ERROR:  requested WAL segment 0001000200A2 has already
been removed
2024-01-15 01:14:02.884 CST [1792] LOG:  waiting for WAL to become
available at 2/A2002000

[repeat for 49000 lines]

What's my next step?

Thanks,
Brad.


Re: replication isn't replicating

2024-01-16 Thread Brad White
Sorry for the repeat. It looked like it hadn't been sent. 😔

>


Re: replication isn't replicating

2024-01-16 Thread Emanuel Calvo
El mar, 16 ene 2024 a las 22:47, Brad White () escribió:

> Errors from the Primary server
>
> 2024-01-15 00:01:06.166 CST [1428] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:01:06.166 CST [1428] STATEMENT:  START_REPLICATION
> 2/A200 TIMELINE 1
> 2024-01-15 00:01:11.158 CST [3472] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:01:11.158 CST [3472] STATEMENT:  START_REPLICATION
> 2/A200 TIMELINE 1
> 2024-01-15 00:01:11.158 CST [3472] LOG:  could not receive data from
> client: An existing connection was forcibly closed by the remote host.
>
>
These log entries mean that some node is requesting a WAL segment that was
already removed from
the server.



>
> 2024-01-15 00:01:16.166 CST [664] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:01:16.166 CST [664] STATEMENT:  START_REPLICATION 2/A200
> TIMELINE 1
> 2024-01-15 00:01:21.161 CST [2016] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:01:21.161 CST [2016] STATEMENT:  START_REPLICATION
> 2/A200 TIMELINE 1
> 2024-01-15 00:01:21.161 CST [2016] LOG:  could not receive data from
> client: An existing connection was forcibly closed by the remote host.
>
> [repeat for 55 lines]
> 
> Errors from the backup server
>
> 2024-01-15 01:13:57.893 CST [2988] LOG:  started streaming WAL from
> primary at 2/A200 on timeline 1
> 2024-01-15 01:13:57.893 CST [2988] FATAL:  could not receive data from WAL
> stream: ERROR:  requested WAL segment 0001000200A2 has already
> been removed
> 2024-01-15 01:13:57.893 CST [1792] LOG:  waiting for WAL to become
> available at 2/A2002000
> 2024-01-15 01:14:02.884 CST [2552] LOG:  started streaming WAL from
> primary at 2/A200 on timeline 1
> 2024-01-15 01:14:02.884 CST [2552] FATAL:  could not receive data from WAL
> stream: ERROR:  requested WAL segment 0001000200A2 has already
> been removed
> 2024-01-15 01:14:02.884 CST [1792] LOG:  waiting for WAL to become
> available at 2/A2002000
>
>

These are related to the backup not finding that segment, so it means
you'll need to resync
your backup stream. I assume that you're using barman and using
https://docs.pgbarman.org/release/3.9.0/#streaming-backup .

Hope it helped.

-- 
--
Emanuel Calvo
OnGres Database Engineer | ViaDB Founder


Re: Moving to Postgresql database

2024-01-16 Thread Tom Lane
Dominique Devienne  writes:
> But sure, I take your point, you can emulate statement-level (implicit)
> rollback via an explicit SAVEPOINT, and ROLLBACK to the savepoint instead.

> But my point remains, that something like what that extension does should
> be an option of PostgreSQL itself, not an extension. --DD

> PS: I'd also be happy to hear why it's not, or won't be, on technical terms.

The reason it's not going to happen is that the community (or at least
the more senior developers) still remembers what happened the last
time we tried it.

We did implement server-side auto-rollback years ago in PG 7.3,
and it was enough of a disaster that we took it out again in 7.4.
The problem is that now you have a switch somewhere (whether a GUC
or something else, still a switch) that fundamentally changes the
transactional semantics seen by applications.  Run an application
in the wrong mode and you have a broken app.  Worse, there is an
awful lot of client-side code that now has to cope with both
behaviors.  We thought that would be okay ... well, it wasn't.
It was a mess.  It would be a bigger mess now if we were to try it
again, because there would be even more broken client code.
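For reference, the client-side emulation discussed upthread is a savepoint per
fallible statement; a sketch against a hypothetical table t with a primary key:

```sql
-- Assumed table: CREATE TABLE t (id int PRIMARY KEY);
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT s1;
INSERT INTO t VALUES (1);        -- fails with unique_violation
ROLLBACK TO SAVEPOINT s1;        -- the transaction is usable again
INSERT INTO t VALUES (2);        -- the try-something-else path
COMMIT;                          -- rows 1 and 2 are committed
```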

regards, tom lane




Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread hector vass
On Tue, 16 Jan 2024, 17:21 Ron Johnson,  wrote:

> Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a
> different name (while also referring to it by the original name).
>
> We have an application running on DB2/UDB which (for reasons wholly
> unknown to me, and probably also to the current developer) extensively uses
> this with two schemas: MTUSER and MTQRY.  For example, sometimes refer to
> MTUSER.sometable and other times refer to it as MYQRY.sometable.
>
> My goal is to present a way to migrate from UDB to PG with as few
> application changes as possible.  Thus, the need to mimic aliases.
>
> Maybe updatable views?
> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>


I think views will work.  An alternative might be to interpose a proxy that
rewrites the SQL.  https://www.galliumdata.com/ gives you an idea of what this
might look like, although you could do a lite version yourself.



>


Re: Tips on troubleshooting slow DELETE (suspect cascades)

2024-01-16 Thread Tom Lane
Adrian Klaver  writes:
> On 1/16/24 09:45, Jim Vanns wrote:
>> I have a slow (CPU bound) DELETE statement I'm attempting to debug and I 
>> suspect that its actually the ON DELETE CASCADE on the foreign key thats 
>> causing it.

99% of the time, the cause is lack of an index on the foreign key's
referencing columns.  We make you have a unique index on the
referenced columns, because otherwise the FK constraint's semantics
are unclear.  But you're not required to make one on the other side.

>> What I need is a way to see into this statement as it executes to 
>> confirm my suspicion - does anyone have any tips on that?

> Explain:
> https://www.postgresql.org/docs/current/sql-explain.html

Specifically, if EXPLAIN ANALYZE shows a lot of time spent in the
enforcement trigger for the FK, this is likely what's happening.
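A minimal sketch of the usual shape of the problem and its fix (table names
are invented):

```sql
-- The referenced side must have a unique index (the PK provides it) ...
CREATE TABLE parent (id bigint PRIMARY KEY);
-- ... but nothing forces an index on the referencing side:
CREATE TABLE child (
    id        bigint PRIMARY KEY,
    parent_id bigint REFERENCES parent (id) ON DELETE CASCADE
);
-- Without this index, each cascaded delete scans "child" sequentially:
CREATE INDEX child_parent_id_idx ON child (parent_id);

-- Per-trigger timing shows up in:
EXPLAIN (ANALYZE) DELETE FROM parent WHERE id = 42;
```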

regards, tom lane




Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Rob Sargent

On 1/16/24 10:20, Ron Johnson wrote:
Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by 
a different name (while also referring to it by the original name).


We have an application running on DB2/UDB which (for reasons wholly 
unknown to me, and probably also to the current developer) extensively 
uses this with two schemas: MTUSER and MTQRY.  For example, sometimes 
refer to MTUSER.sometable and other times refer to it as MYQRY.sometable.


My goal is to present a way to migrate from UDB to PG with as few 
application changes as possible.  Thus, the need to mimic aliases.


Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;

Isn't it time to get rid of that debt?  A sed -i 's/MTUSER/MTQRY/g' (or 
vice versa) ends what looks to me to be a split brain problem.  All the 
sql is in git right? :)


Or perhaps you have to beef the sed up to use word boundaries just in case.




Re: replication not replicating

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 4:10 PM Brad White  wrote:

> Errors from the primary
>
> 2024-01-15 00:00:51.157 CST [2660] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:00:51.157 CST [2660] STATEMENT:  START_REPLICATION
> 2/A200 TIMELINE 1
> 2024-01-15 00:00:56.158 CST [492] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:00:56.158 CST [492] STATEMENT:  START_REPLICATION 2/A200
> TIMELINE 1
> 2024-01-15 00:00:56.159 CST [492] LOG:  could not receive data from
> client: An existing connection was forcibly closed by the remote host.
>
>
> 2024-01-15 00:01:01.151 CST [3404] ERROR:  requested WAL segment
> 0001000200A2 has already been removed
> 2024-01-15 00:01:01.151 CST [3404] STATEMENT:  START_REPLICATION
> 2/A200 TIMELINE 1
> 2024-01-15 00:01:01.152 CST [3404] LOG:  could not receive data from
> client: An existing connection was forcibly closed by the remote host.
>
> [repeat for 55000 lines]
> 
>
> Errors from the hot backup at roughly the same time
>
> 2024-01-15 01:13:57.893 CST [2988] LOG:  started streaming WAL from
> primary at 2/A200 on timeline 1
> 2024-01-15 01:13:57.893 CST [2988] FATAL:  could not receive data from WAL
> stream: ERROR:  requested WAL segment 0001000200A2 has already
> been removed
> 2024-01-15 01:13:57.893 CST [1792] LOG:  waiting for WAL to become
> available at 2/A2002000
> 2024-01-15 01:14:02.884 CST [2552] LOG:  started streaming WAL from
> primary at 2/A200 on timeline 1
> 2024-01-15 01:14:02.884 CST [2552] FATAL:  could not receive data from WAL
> stream: ERROR:  requested WAL segment 0001000200A2 has already
> been removed
> 2024-01-15 01:14:02.884 CST [1792] LOG:  waiting for WAL to become
> available at 2/A2002000
>
> [repeat for 49000 lines]
>

What database version?
Streaming or Logical replication?
Using a slot?


Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 5:31 PM Rob Sargent  wrote:

> On 1/16/24 10:20, Ron Johnson wrote:
>
> Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a
> different name (while also referring to it by the original name).
>
> We have an application running on DB2/UDB which (for reasons wholly
> unknown to me, and probably also to the current developer) extensively uses
> this with two schemas: MTUSER and MTQRY.  For example, sometimes refer to
> MTUSER.sometable and other times refer to it as MYQRY.sometable.
>
> My goal is to present a way to migrate from UDB to PG with as few
> application changes as possible.  Thus, the need to mimic aliases.
>
> Maybe updatable views?
> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>
> Isn't it time to get rid of that debt?  A sed -i 's/MTUSER/MTQRY/g' (or
> vice versa) ends what looks to me to be a split brain problem.  All the sql
> is in git right? :)
>
> Or perhaps you have to beef the sed up to use word boundaries just in case.
>

I'm not a Java web developer... 😁


Re: Help needed for the resolution of memory leak

2024-01-16 Thread Merlin Moncure
On Tue, Jan 16, 2024 at 9:10 AM Sasmit Utkarsh 
wrote:

> Hi Merlin et al.
>
> I have tried to have the above change added in the missing places. Still,
> I see the below leaks reported by the address sanitizer. Please see
> the attachments for the leak reported and the function definition updated.
> Not sure for PQexecPrepared if we call PQclear(res) for cleaning up as
> well. let me know if you need any more information
>

You have many other leaks. See the code below: you are making up to three
sequential calls that create a result before calling clear. *All* calls
creating and returning a PGresult have to be cleared before the
pointer is reused.

merlin



LOG_DEBUG("%s() conninfo=%s",__func__,conninfo);

if(is_shadow_db)
{
    shadow_db_conn = PQconnectdb(shadow_db_conn_info);
    if ( PQstatus(shadow_db_conn) != CONNECTION_OK )
    {
        LOG_ERROR("Connection to shadow database failed! %s", PQerrorMessage(conn));
        PQfinish(shadow_db_conn);
        exit(1);
    }
    res = PQexec(shadow_db_conn, "SET bytea_output = 'escape'");
    LOG_DEBUG("%s() Connection to shadow_shc_data database SUCCESSFUL",__func__);
    // execute_stored_procedure(shadow_db_conn);
}

conn = PQconnectdb(conninfo);
if ( PQstatus(conn) != CONNECTION_OK ) {
    LOG_ERROR("Connection to database failed! %s", PQerrorMessage(conn));
    PQfinish(conn);
    exit(1);
} else {
    res = PQexec(conn, "SET bytea_output = 'escape'");
    LOG_DEBUG("%s() Connection to shc_data database SUCCESSFUL",__func__);
}

res = PQexec(conn, "START TRANSACTION");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
    LOG_ERROR("START TRANSACTION failed: %s", PQerrorMessage(conn));
    SQL_exit_nicely(conn,res);
}
PQclear(res);
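One way to make the clearing rule hard to violate is to funnel every
fire-and-forget command through a small helper, so no PGresult pointer is ever
overwritten before it is cleared. A sketch only (LOG_ERROR and the exit
handling stand in for the project's own routines):

```c
/* Sketch: execute a command, always clearing the PGresult exactly once. */
static void exec_command(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        LOG_ERROR("%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);        /* clear on the error path too */
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);            /* and on the success path */
}

/* Usage, replacing the bare PQexec calls above:
 *   exec_command(conn, "SET bytea_output = 'escape'");
 *   exec_command(conn, "START TRANSACTION");
 */
```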


Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Rob Sargent

On 1/16/24 15:39, Ron Johnson wrote:

On Tue, Jan 16, 2024 at 5:31 PM Rob Sargent  wrote:

On 1/16/24 10:20, Ron Johnson wrote:

Some RDBMSs have CREATE ALIAS, which allows you to refer to a
table by a different name (while also referring to it by the
original name).

We have an application running on DB2/UDB which (for reasons
wholly unknown to me, and probably also to the current developer)
extensively uses this with two schemas: MTUSER and MTQRY.  For
example, sometimes refer to MTUSER.sometable and other times
refer to it as MYQRY.sometable.

My goal is to present a way to migrate from UDB to PG with as few
application changes as possible.  Thus, the need to mimic aliases.

Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;


Isn't it time to get rid of that debt?  A sed -i
's/MTUSER/MTQRY/g' (or vice versa) ends what looks to me to be a
split brain problem.  All the sql is in git right? :)

Or perhaps you have to beef the sed up to use word boundaries just
in case.


I'm not a Java web developer... 😁


You need to adjust your glasses if that's what you see me as.


Re: replication not replicating

2024-01-16 Thread Brad White
On Tue, Jan 16, 2024 at 4:35 PM Ron Johnson  wrote:

> On Tue, Jan 16, 2024 at 4:10 PM Brad White  wrote:
>
>> Errors from the primary
>>
>> 2024-01-15 00:00:51.157 CST [2660] ERROR:  requested WAL segment
>> 0001000200A2 has already been removed
>> 2024-01-15 00:00:51.157 CST [2660] STATEMENT:  START_REPLICATION
>> 2/A200 TIMELINE 1
>> 2024-01-15 00:00:56.158 CST [492] ERROR:  requested WAL segment
>> 0001000200A2 has already been removed
>> 2024-01-15 00:00:56.158 CST [492] STATEMENT:  START_REPLICATION
>> 2/A200 TIMELINE 1
>> 2024-01-15 00:00:56.159 CST [492] LOG:  could not receive data from
>> client: An existing connection was forcibly closed by the remote host.
>>
>>
>> 2024-01-15 00:01:01.151 CST [3404] ERROR:  requested WAL segment
>> 0001000200A2 has already been removed
>> 2024-01-15 00:01:01.151 CST [3404] STATEMENT:  START_REPLICATION
>> 2/A200 TIMELINE 1
>> 2024-01-15 00:01:01.152 CST [3404] LOG:  could not receive data from
>> client: An existing connection was forcibly closed by the remote host.
>>
>> [repeat for 55000 lines]
>>
>>
> What database version?
>
v15


> Streaming or Logical replication?
>
IDK

> Using a slot?
>
No.

Relevant sections from the conf

--- Primary ---
#--
# WRITE-AHEAD LOG
#--

# - Settings -

wal_level = replica  # minimal, replica, or logical

 # - Archiving -

archive_mode = on # enables archiving; off, on, or always
# (change requires restart)
#archive_library = '' # library to use to archive a logfile segment
# (empty string indicates archive_command should
# be used)
archive_command = 'copy %p
"DISKSTATION\\AccessData\\Prod\\WALfiles\\%f"'
# command to use to archive a
logfile segment
archive_timeout = 0# force a logfile segment switch after this
   # number of seconds; 0 disables

primary_conninfo is set.
Everything else is commented out, using default values.

--- Backup Server ---
#--
# WRITE-AHEAD LOG
#--

# - Settings -

wal_level = replica# minimal, replica, or logical
archive_cleanup_command = 'pg_archivecleanup
DISKSTATION\\AccessData\\Prod\\WALfiles %r'

# - Archiving -

archive_mode = on

# - Archive Recovery -
#restore_command = ''


>


Re: replication isn't replicating

2024-01-16 Thread Brad White
On Tue, Jan 16, 2024 at 3:53 PM Emanuel Calvo <3man...@gmail.com> wrote:

>
> El mar, 16 ene 2024 a las 22:47, Brad White ()
> escribió:
>
>> Errors from the Primary server
>>
>> 2024-01-15 00:01:06.166 CST [1428] ERROR:  requested WAL segment
>> 0001000200A2 has already been removed
>> 2024-01-15 00:01:06.166 CST [1428] STATEMENT:  START_REPLICATION
>> 2/A200 TIMELINE 1
>> 2024-01-15 00:01:11.158 CST [3472] ERROR:  requested WAL segment
>> 0001000200A2 has already been removed
>> 2024-01-15 00:01:11.158 CST [3472] STATEMENT:  START_REPLICATION
>> 2/A200 TIMELINE 1
>> 2024-01-15 00:01:11.158 CST [3472] LOG:  could not receive data from
>> client: An existing connection was forcibly closed by the remote host.
>>
>>
> These log entries mean that some node is requesting a WAL segment that was
> already removed from
> the server.
>
> 2024-01-15 01:14:02.884 CST [2552] LOG:  started streaming WAL from
>> primary at 2/A200 on timeline 1
>> 2024-01-15 01:14:02.884 CST [2552] FATAL:  could not receive data from
>> WAL stream: ERROR:  requested WAL segment 0001000200A2 has
>> already been removed
>> 2024-01-15 01:14:02.884 CST [1792] LOG:  waiting for WAL to become
>> available at 2/A2002000
>>
>>
>
> These are related to the backup not finding that segment, so it means
> you'll need to resync
> your backup stream.
>

You pointed me in the right direction.
Turns out the files are still there, so it must be a permission issue.
pgUser has full access to the files.
Postgres is running as pgUser, except that wasn't true on the backup.
Was running as 'Network Service'.
Should be better now.

Aaaand I'm wrong.
Still getting the same errors on both servers.


Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 5:57 PM Rob Sargent  wrote:

> On 1/16/24 15:39, Ron Johnson wrote:
>
> On Tue, Jan 16, 2024 at 5:31 PM Rob Sargent  wrote:
>
>> On 1/16/24 10:20, Ron Johnson wrote:
>>
>> Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a
>> different name (while also referring to it by the original name).
>>
>> We have an application running on DB2/UDB which (for reasons wholly
>> unknown to me, and probably also to the current developer) extensively uses
>> this with two schemas: MTUSER and MTQRY.  For example, sometimes refer to
>> MTUSER.sometable and other times refer to it as MYQRY.sometable.
>>
>> My goal is to present a way to migrate from UDB to PG with as few
>> application changes as possible.  Thus, the need to mimic aliases.
>>
>> Maybe updatable views?
>> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>>
>> Isn't it time to get rid of that debt?  A sed -i 's/MTUSER/MTQRY/g' (or
>> vice versa) ends what looks to me to be a split brain problem.  All the sql
>> is in git right? :)
>>
>> Or perhaps you have to beef the sed up to use word boundaries just in
>> case.
>>
>
> I'm not a Java web developer... 😁
>
>
> You need to adjust you glasses if that's what you see me as.
>

You're the one who apparently sees me as having any control over anything
except when the backups run. 😞


Re: replication not replicating

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 6:26 PM Brad White  wrote:

> On Tue, Jan 16, 2024 at 4:35 PM Ron Johnson 
> wrote:
>
>> On Tue, Jan 16, 2024 at 4:10 PM Brad White  wrote:
>>
>>> Errors from the primary
>>>
>>> 2024-01-15 00:00:51.157 CST [2660] ERROR:  requested WAL segment
>>> 0001000200A2 has already been removed
>>> 2024-01-15 00:00:51.157 CST [2660] STATEMENT:  START_REPLICATION
>>> 2/A200 TIMELINE 1
>>> 2024-01-15 00:00:56.158 CST [492] ERROR:  requested WAL segment
>>> 0001000200A2 has already been removed
>>> 2024-01-15 00:00:56.158 CST [492] STATEMENT:  START_REPLICATION
>>> 2/A200 TIMELINE 1
>>> 2024-01-15 00:00:56.159 CST [492] LOG:  could not receive data from
>>> client: An existing connection was forcibly closed by the remote host.
>>>
>>>
>>> 2024-01-15 00:01:01.151 CST [3404] ERROR:  requested WAL segment
>>> 0001000200A2 has already been removed
>>> 2024-01-15 00:01:01.151 CST [3404] STATEMENT:  START_REPLICATION
>>> 2/A200 TIMELINE 1
>>> 2024-01-15 00:01:01.152 CST [3404] LOG:  could not receive data from
>>> client: An existing connection was forcibly closed by the remote host.
>>>
>>> [repeat for 55000 lines]
>>>
>>>
>> What database version?
>>
> v15
>
>
>> Streaming or Logical replication?
>>
> IDK
>
>> Using a slot?
>>
> No.
>
> Relevant sections from the conf
>
> --- Primary ---
>
> #--
> # WRITE-AHEAD LOG
>
> #--
>
> # - Settings -
>
> wal_level = replica  # minimal, replica, or logical
>
>  # - Archiving -
>
> archive_mode = on # enables archiving; off, on, or always
> # (change requires restart)
> #archive_library = '' # library to use to archive a logfile segment
> # (empty string indicates archive_command should
> # be used)
> archive_command = 'copy %p
> "DISKSTATION\\AccessData\\Prod\\WALfiles\\%f"'
> # command to use to archive a
> logfile segment
> archive_timeout = 0# force a logfile segment switch after this
># number of seconds; 0 disables
>
> primary_conninfo is set.
> Everything else is commented out, using default values.
>
> --- Backup Server ---
>
> #--
> # WRITE-AHEAD LOG
>
> #--
>
> # - Settings -
>
> wal_level = replica# minimal, replica, or logical
> archive_cleanup_command = 'pg_archivecleanup
> DISKSTATION\\AccessData\\Prod\\WALfiles %r'
>
> # - Archiving -
>
> archive_mode = on
>
> # - Archive Recovery -
> #restore_command = ''
>

Hmmm.  Are you log shipping?

If so, why?  Streaming Replication is sooo much easier.


Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Jim Nasby

On 1/16/24 4:57 PM, Rob Sargent wrote:

Or perhaps you have to beef the sed up to use word boundaries just
in case.


I'm not a Java web developer... 😁


You need to adjust you glasses if that's what you see me as.


Reality is that basically all modern (as in last 20 years) SQL access is 
via frameworks that all use their own language and come up with SQL 
based on that. How hard it'd be to bulk change the schema depends 
entirely on the framework.

--
Jim Nasby, Data Architect, Austin TX





Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Rob Sargent

On 1/16/24 17:03, Ron Johnson wrote:

On Tue, Jan 16, 2024 at 5:57 PM Rob Sargent  wrote:

On 1/16/24 15:39, Ron Johnson wrote:

On Tue, Jan 16, 2024 at 5:31 PM Rob Sargent
 wrote:

On 1/16/24 10:20, Ron Johnson wrote:

Some RDBMSs have CREATE ALIAS, which allows you to refer to
a table by a different name (while also referring to it by
the original name).

We have an application running on DB2/UDB which (for reasons
wholly unknown to me, and probably also to the current
developer) extensively uses this with two schemas: MTUSER
and MTQRY.  For example, sometimes refer to
MTUSER.sometable and other times refer to it as MYQRY.sometable.

My goal is to present a way to migrate from UDB to PG with
as few application changes as possible.  Thus, the need to
mimic aliases.

Maybe updatable views?
CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;


Isn't it time to get rid of that debt?  A sed -i
's/MTUSER/MTQRY/g' (or vice versa) ends what looks to me to
be a split brain problem.  All the sql is in git right? :)

Or perhaps you have to beef the sed up to use word boundaries
just in case.


I'm not a Java web developer... 😁


You need to adjust you glasses if that's what you see me as.

You're the one who apparently sees me as having any control over 
anything except when the backups run. 😞


You could lie to DEV  and say the smart people on the pg list suggest 
sed ;).  Better yet tell his PM!




Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread Rob Sargent

On 1/16/24 17:39, Jim Nasby wrote:

On 1/16/24 4:57 PM, Rob Sargent wrote:

    Or perhaps you have to beef the sed up to use word boundaries just
    in case.


I'm not a Java web developer... 😁


You need to adjust you glasses if that's what you see me as.


Reality is that basically all modern (as in last 20 years) SQL access 
is via frameworks that all use their own language and come up with SQL 
based on that. How hard it'd be to bulk change the schema depends 
entirely on the framework.
Hm, it's a string /somewhere/.  The rest of this thread might be accused 
of adding to the problem.


Re: Parameter value in RDS

2024-01-16 Thread Jim Nasby

On 1/16/24 3:20 PM, Bruce Momjian wrote:

We can't answer RDS-specific questions here.  I suggest you ask the
vendor.

Okay, it seems someone here_did_  answer the question.  🙂


Yeah, but still best that people go to vendors for stuff that's not 
community. If for no other reason than one of the best ways to get 
documentation improved is for support to get tired of answering the same 
question over and over again ;p

--
Jim Nasby, Data Architect, Austin TX





Re: replication not replicating

2024-01-16 Thread Brad White
On Tue, Jan 16, 2024 at 6:24 PM Ron Johnson  wrote:

> On Tue, Jan 16, 2024 at 6:26 PM Brad White  wrote:
>
>> On Tue, Jan 16, 2024 at 4:35 PM Ron Johnson 
>> wrote:
>>
>>>
 What database version?
>>>
>> v15
>>
>>
>>> Streaming or Logical replication?
>>>
>> IDK
>>
>>> Using a slot?
>>>
>> No.
>>
>> Relevant sections from the conf
>>
>> --- Primary ---
>>
>> #--
>> # WRITE-AHEAD LOG
>>
>> #--
>>
>> # - Settings -
>>
>> wal_level = replica  # minimal, replica, or logical
>>
>>  # - Archiving -
>>
>> archive_mode = on # enables archiving; off, on, or always
>> # (change requires restart)
>> #archive_library = '' # library to use to archive a logfile
>> segment
>> # (empty string indicates archive_command should
>> # be used)
>> archive_command = 'copy %p
>> "DISKSTATION\\AccessData\\Prod\\WALfiles\\%f"'
>> # command to use to archive a
>> logfile segment
>> archive_timeout = 0# force a logfile segment switch after this
>># number of seconds; 0 disables
>>
>> primary_conninfo is set.
>> Everything else is commented out, using default values.
>>
>> --- Backup Server ---
>>
>> #--
>> # WRITE-AHEAD LOG
>>
>> #--
>>
>> # - Settings -
>>
>> wal_level = replica# minimal, replica, or logical
>> archive_cleanup_command = 'pg_archivecleanup
>> DISKSTATION\\AccessData\\Prod\\WALfiles %r'
>>
>> # - Archiving -
>>
>> archive_mode = on
>>
>> # - Archive Recovery -
>> #restore_command = ''
>>
>
> Hmmm.  Are you log shipping?
>
> If so, why?  Streaming Replication is sooo much easier.
>

I'm attempting async streaming.


Re: replication not replicating

2024-01-16 Thread Brad White
On Tue, Jan 16, 2024 at 7:33 PM Brad White  wrote:

> On Tue, Jan 16, 2024 at 6:24 PM Ron Johnson 
> wrote:
>
>> On Tue, Jan 16, 2024 at 6:26 PM Brad White  wrote:
>>
>>> On Tue, Jan 16, 2024 at 4:35 PM Ron Johnson 
>>> wrote:
>>>

> What database version?

>>> v15
>>>
>>>
 Streaming or Logical replication?

>>> IDK
>>>
 Using a slot?

>>> No.
>>>
>>> Relevant sections from the conf
>>>
>>> archive_mode = on
>>>
>>> # - Archive Recovery -
>>> #restore_command = ''
>>>
>>
>> Hmmm.  Are you log shipping?
>>
>> If so, why?  Streaming Replication is sooo much easier.
>>
>
> I'm attempting async streaming.
>
Looks like I was just missing the restore command.
Everything is looking happier now.


WAL file clean up

2024-01-16 Thread Brad White
I have the 'archive_cleanup_command' command specified, but I still have
WAL files.
The documentation seems to indicate that it will run automatically, but it
doesn't seem to be running.

  archive_cleanup_command = 'pg_archivecleanup
DISKSTATION\\AccessData\\Prod\\WALfiles %r'
   # command to execute at every restartpoint

It seems it should run every time there is a restartpoint.
Restartpoints can happen at any checkpoint in the log.
My checkpoint time out is set to 5 minutes.

  checkpoint_timeout = 5min

  Restartpoints are more likely to happen when getting closer to the size
limit.
  max_wal_size = 1GB

My folder size is now 430 files = 6.8 GB. Not terrible, but should be
enough to trigger a restartpoint.

How do I tell whether I haven't had a restartpoint, or I did and the command
didn't work?

No errors in the pg_log
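One way to separate those two cases: run the cleanup command by hand with
pg_archivecleanup's -n (dry-run) flag, which only prints the files it would
remove, and set log_checkpoints = on so restartpoints are logged when they
happen. A sketch, reusing the path and a segment name from the logs above:

```
pg_archivecleanup -n DISKSTATION\AccessData\Prod\WALfiles 0001000200A2
```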

Thanks,
Brad.


Re: postgres sql assistance

2024-01-16 Thread arun chirappurath
Hi Jim,

Thank you so much for the kind review.


The architect is pressing for a native procedure to do the data load.

I shall Google and try to find a more suitable one than writing one by myself.


Thanks again,
Arun

On Wed, 17 Jan, 2024, 01:58 Jim Nasby,  wrote:

> On 1/16/24 6:34 AM, arun chirappurath wrote:
> > I am trying to load data from the temp table to the main table and catch
> > the exceptions inside another table.
>
> I don't have a specific answer, but do have a few comments:
>
> - There are much easier ways to do this kind of data load. Search for
> "postgres data loader" on google.
>
> - When you're building your dynamic SQL you almost certainly should have
> some kind of ORDER BY on the queries pulling data from
> information_schema. SQL never mandates data ordering except when you
> specifically use ORDER BY, so the fact that your fields are lining up
> right now is pure luck.
>
> - EXCEPTION WHEN others is kinda dangerous, because it traps *all*
> errors. It's much safer to find the exact error code. An easy way to do
> that in psql is \errverbose [1]. In this particular case that might not
> work well since there's a bunch of different errors you could get that
> are directly related to a bad row of data. BUT, there's also a bunch of
> errors you could get that have nothing whatsoever to do with the data
> you're trying to load (like if there's a bug in your code that's
> building the INSERT statement).
>
> - You should look at the other details you can get via GET STACKED
> DIAGNOSTICS [2]. As far as I can tell, your script as-written will
> always return the first column in the target table. Instead you should
> use COLUMN_NAME. Note that not every error will set that though.
>
> 1:
>
> https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMAND-ERRVERBOSE
> 2:
>
> https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-EXCEPTION-DIAGNOSTICS
> --
> Jim Nasby, Data Architect, Austin TX
>
>
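
Jim's two points above — deterministic column ordering when building dynamic
SQL, and trapping specific errors with GET STACKED DIAGNOSTICS instead of a
blanket WHEN OTHERS — could be sketched roughly like this (table names
`main_table`, `temp_table`, and `load_errors` are hypothetical stand-ins for
Arun's actual objects):

```sql
-- Deterministic column list: always ORDER BY ordinal_position when pulling
-- from information_schema; without ORDER BY the row order is unspecified.
SELECT string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position)
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'main_table';

-- Trap the specific data errors you expect, and read COLUMN_NAME from
-- GET STACKED DIAGNOSTICS rather than assuming the first column failed.
DO $$
DECLARE
  v_col   text;
  v_state text;
  v_msg   text;
BEGIN
  INSERT INTO main_table SELECT * FROM temp_table;
EXCEPTION
  WHEN not_null_violation OR check_violation THEN
    GET STACKED DIAGNOSTICS
      v_col   = COLUMN_NAME,       -- note: not every error sets this
      v_state = RETURNED_SQLSTATE,
      v_msg   = MESSAGE_TEXT;
    INSERT INTO load_errors (sqlstate, column_name, message)
    VALUES (v_state, v_col, v_msg);
END;
$$;
```

Errors outside the listed conditions (e.g. a bug in the dynamic SQL itself)
would still propagate instead of being silently logged as bad data.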


Re: WAL file clean up

2024-01-16 Thread Ron Johnson
On Tue, Jan 16, 2024 at 10:03 PM Brad White  wrote:

> I have the 'archive_cleanup_command' command specified, but I still have
> WAL files.
> The documentation seems to indicate that it will run automatically, but it
> doesn't seem to be running.
>
>   archive_cleanup_command = 'pg_archivecleanup
> DISKSTATION\\AccessData\\Prod\\WALfiles %r'
># command to execute at every restartpoint
>
> It seems it should run every time there is a restartpoint.
> Restartpoints can happen at any checkpoint in the log.
> My checkpoint time out is set to 5 minutes.
>
>   checkpoint_timeout = 5min
>
>   Restartpoints are more likely to happen when getting closer to the size
> limit.
>   max_wal_size = 1GB
>

That seems pretty low.


>
> My folder size is now 430 files = 6.8 GB. Not terrible, but should be
> enough to trigger a restartpoint.
>
> How do I tell whether I haven't had a restartpoint, or whether I did and
> the command didn't work?
>

What methods are you using for replication and database backups?

Streaming replication using slots, plus physical backups via (for example)
pgBackRest, handle all of this for you automagically.
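
One detail worth checking first: restartpoints (and therefore
archive_cleanup_command) only ever occur during recovery, i.e. on a standby
or while restoring from archive — on a primary the command never runs.
Assuming this is a standby, a sketch for diagnosing it (path kept as in the
original post):

```
# postgresql.conf on the standby
log_checkpoints = on        # emits "restartpoint starting/complete" log lines
archive_cleanup_command = 'pg_archivecleanup "DISKSTATION\\AccessData\\Prod\\WALfiles" %r'
```

With log_checkpoints on, the server log shows whether restartpoints are
happening at all; if they are but the files remain, the cleanup command
itself (quoting, path, permissions) is the suspect.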


Re: Help needed for the resolution of memory leak

2024-01-16 Thread Sasmit Utkarsh
Thanks, I'll take a look.

Regards,
Sasmit Utkarsh
+91-7674022625


On Wed, Jan 17, 2024 at 4:12 AM Merlin Moncure  wrote:

> On Tue, Jan 16, 2024 at 9:10 AM Sasmit Utkarsh 
> wrote:
>
>> Hi Merlin et al.
>>
>> I have tried to have the above change added in the missing places. Still,
>> I see the below leaks reported by the address sanitizer. Please see
>> the attachments for the leak reported and the function definition updated.
>> Not sure for PQexecPrepared if we call PQclear(res) for cleaning up as
>> well. let me know if you need any more information
>>
>
> You have many other leaks. See the code below: you are making up to three
> sequential calls that create a result before calling clear.  *All* calls
> that create and return a PGresult have to be cleared before the
> pointer is reused.
>
> merlin
>
>
>
>   LOG_DEBUG("%s() conninfo=%s",__func__,conninfo);
>
>   if(is_shadow_db)
>   {
>   shadow_db_conn = PQconnectdb(shadow_db_conn_info);
>   if ( PQstatus(shadow_db_conn ) != CONNECTION_OK )
>   {
>  LOG_ERROR("Connection to shadow database failed! %s", 
> PQerrorMessage(conn));
>  PQfinish(shadow_db_conn);
>  exit(1);
>   }
>   res = PQexec(shadow_db_conn, "SET bytea_output = 'escape'");
>   LOG_DEBUG("%s() Connection to shadow_shc_data database SUCCESSFUL",__func__);
>   // execute_stored_procedure(shadow_db_conn);
>   }
>
>   conn = PQconnectdb(conninfo);
>   if ( PQstatus(conn) != CONNECTION_OK ) {
>   LOG_ERROR("Connection to database failed! %s", 
> PQerrorMessage(conn));
>   PQfinish(conn);
>   exit(1);
>   } else {
>   res = PQexec(conn, "SET bytea_output = 'escape'");
>   LOG_DEBUG("%s() Connection to shc_data database SUCCESSFUL",__func__);
>   }
>
>   res = PQexec(conn, "START TRANSACTION");
>   if (PQresultStatus(res) != PGRES_COMMAND_OK)
>   {
>   LOG_ERROR("START TRANSACTION failed: %s", PQerrorMessage(conn));
>   SQL_exit_nicely(conn,res);
>   }
>   PQclear(res);
>
>
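
A minimal sketch of the ownership rule Merlin describes — every PGresult
returned by PQexec/PQexecPrepared must be PQclear'ed exactly once before the
pointer is reused, even on success. The helper name `exec_or_die` is made up
for illustration, and building/running this requires libpq and a reachable
server:

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Run a command and clear its result immediately, so the caller can never
   leak a PGresult by overwriting the pointer. */
static void exec_or_die(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);          /* clear even on error before bailing out */
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);              /* success path must clear too */
}

int main(void)
{
    PGconn *conn = PQconnectdb("");   /* connection params from environment */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    exec_or_die(conn, "SET bytea_output = 'escape'");
    exec_or_die(conn, "START TRANSACTION");
    PQfinish(conn);
    return 0;
}
```

Funneling every call through one helper like this makes the leaks in the
quoted code structurally impossible, rather than relying on remembering a
PQclear at each call site.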