RE: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Kevin Brannen
-Original Message-
From: Alvaro Herrera 
Sent: Tuesday, March 2, 2021 2:19 PM
To: Alexander Farber 
Cc: pgsql-general 
Subject: Re: Localizing stored functions by replacing placeholders in their body

On 2021-Mar-02, Alexander Farber wrote:

> CREATE OR REPLACE FUNCTION localize_hello()
> RETURNS text AS
> $func$
> SELECT '$(hello)';
> $func$ LANGUAGE sql IMMUTABLE;

I'm not sure this is a great approach to in-database translations: you have one 
function per string, which is cumbersome, bloated and probably slow...

---

I would agree with Alvaro and take it a step further. Perhaps you didn't do it 
this way, but many applications are split with back-end code to get & generate 
data while a set of templates is used to produce the result (HTML or whatever) 
the user sees. Many times these "template systems/toolkits" have I18N built 
into them in some way. If that's possible for you, I'd suggest investigating 
that. Overall, this seems more like an application problem and not a DB problem.

HTH,
Kevin





self-made certs not quite right

2021-03-02 Thread Rob Sargent

I'm trying to follow the instructions in V12 §18.9.5, Creating Certificates. [1]

I'm stuck in my basement so all references to "/CN=FQN" have been set to 
$(hostname), just the hostname, because $(domainname) returns "(none)" 
which I presume is akin to null.


With my newly minted certs and keys, using psql (to either $(hostname) or 
localhost) I get the "SSL connection (protocol: TLSv1.3...)" message, so 
long as I have an empty ~/.postgresql directory.  If I copy the 
generated root.crt to ~/.postgresql (chown me.me; chmod 400) I get a 
plain connection (no ssl).


With root.crt in ~/.postgresql, testing the jdbc connection from a 
tomcat server generates this failure (again either localhost or $(hostname))
Blow out on db connection to jdbc:postgresql://localhost:5432/postgres; 
SSL error: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target
org.postgresql.util.PSQLException: SSL error: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target


Is this a pkcs v. pem cat fight?  Or is there enough here to tell which 
step went south, or just start over?


[1] https://www.postgresql.org/docs/12/ssl-tcp.html




Re: Locks in creating a partition in CREATE TABLE vs ALTER TABLE

2021-03-02 Thread Alvaro Herrera
On 2021-Mar-02, Asaf Flescher wrote:

> I'm not sure if this is a bug or I'm missing something regarding how
> partitioning is supposed to work but I've noticed (in Postgres 12.6) that
> if I have a partitioned table, and then try to add a partition to it via
> CREATE TABLE ... PARTITION OF, the statement will grab an AccessExclusive
> lock on the partitioned table. Meanwhile, if I create that same table
> normally, then attach it to the partitioned table via ALTER table - no
> AccessExclusive lock.

It's a new feature in Postgres 12 actually -- we went to great lengths to
be able to do ALTER TABLE .. ATTACH PARTITION without a full
AccessExclusive lock.  However, it's just not possible to do the same
for CREATE TABLE .. PARTITION OF.

If you try the same in Postgres 11, you'll notice that both use an
AccessExclusive lock.

-- 
Álvaro Herrera   Valdivia, Chile
Syntax error: function hell() needs an argument.
Please choose what hell you want to involve.




Locks in creating a partition in CREATE TABLE vs ALTER TABLE

2021-03-02 Thread Asaf Flescher
Hi,

I'm not sure if this is a bug or I'm missing something regarding how
partitioning is supposed to work but I've noticed (in Postgres 12.6) that
if I have a partitioned table, and then try to add a partition to it via
CREATE TABLE ... PARTITION OF, the statement will grab an AccessExclusive
lock on the partitioned table. Meanwhile, if I create that same table
normally, then attach it to the partitioned table via ALTER table - no
AccessExclusive lock. Short recreation -

In one session:
CREATE TABLE stuff (stuff_id int) PARTITION BY LIST (stuff_id);
BEGIN;
SELECT * FROM stuff;

Then in a second session:

CREATE TABLE stuff_1 PARTITION OF stuff FOR VALUES IN (1); (Will get stuck,
and a query on pg_locks will show an ungranted AccessExclusive lock).
CREATE TABLE stuff_1 (LIKE stuff); (Will work)
ALTER TABLE stuff ATTACH PARTITION stuff_1 FOR VALUES IN (1); (Will work)
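The ungranted lock can be observed from a third session with a query along these lines (a sketch against the pg_locks system view; only the relevant columns are selected):

```sql
-- While the CREATE TABLE ... PARTITION OF session is stuck:
SELECT l.locktype, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'stuff';
-- The stuck session's AccessExclusiveLock shows up with granted = false.
```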

Logically, the two approaches are doing the same thing, are they not? Or am
I missing something?

Would appreciate any advice here,
Asaf


Aw: Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Karsten Hilbert
> I'm not sure this is a great approach to in-database translations: you
> have one function per string, which is cumbersome, bloated and probably
> slow.  I would suggest having a function that takes a string and returns
> its translation, which is obtained from a couple of tables: one where
> the original strings are stored and another which stores the
> translations for each string into each language.

Perhaps like so

https://www.gnumed.de/documentation/schema/release/#i18n.table.curr-lang

SQL to be found here

https://github.com/ncqgm/gnumed/tree/master/gnumed/gnumed/server/sql

Karsten




Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alvaro Herrera
On 2021-Mar-02, Alexander Farber wrote:

> CREATE OR REPLACE FUNCTION localize_hello()
> RETURNS text AS
> $func$
> SELECT '$(hello)';
> $func$ LANGUAGE sql IMMUTABLE;

I'm not sure this is a great approach to in-database translations: you
have one function per string, which is cumbersome, bloated and probably
slow.  I would suggest having a function that takes a string and returns
its translation, which is obtained from a couple of tables: one where
the original strings are stored and another which stores the
translations for each string into each language.

(You can have the target language be a property of the database, or a
custom GUC setting that the application sets at the start, or just
passed as an argument to the translate() function from somewhere.  Or
maybe each database always has exactly one language.  Whatever suits you
best.)

So the functions that your application calls return strings by doing
stuff like
  SELECT translate('one UFO came and stole one bike');
and they'll get whatever is right for them.  The functions only need to
worry about calling translate() in all the right places; they don't need
to individually worry about fetching the translation etc.

Note that in that design, the original string appears in two places: the
function source code, and the original-strings table.  You could go one
step further and have the function store a code (UUID?) for the string;
then if a message has a typo, you're just one UPDATE away from fixing it
instead of an ALTER FUNCTION.  And also, it's easy to change all
translations together if your UFOs are actually ordinary burglars.

Exercise for the reader: what if your strings have format specifiers?
  "%d UFOs came and stole %d bikes"
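For what it's worth, a minimal sketch of that two-table design might look like this (all table, column, and setting names are illustrative, not anything from the thread; the target language is read here from a custom GUC):

```sql
-- Original strings, keyed by a surrogate id:
CREATE TABLE message (
    message_id  uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    source_text text NOT NULL UNIQUE
);

-- One row per (string, language) pair:
CREATE TABLE message_translation (
    message_id  uuid REFERENCES message,
    lang        text NOT NULL,          -- e.g. 'de', 'fr'
    translated  text NOT NULL,
    PRIMARY KEY (message_id, lang)
);

-- Look up the translation for the current language (a custom GUC here),
-- falling back to the original string when no translation exists:
CREATE FUNCTION translate(p_text text) RETURNS text AS
$func$
    SELECT coalesce(
        (SELECT t.translated
         FROM message m
         JOIN message_translation t USING (message_id)
         WHERE m.source_text = p_text
           AND t.lang = current_setting('myapp.lang', true)),
        p_text);
$func$ LANGUAGE sql STABLE;
```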

-- 
Álvaro Herrera39°49'30"S 73°17'W




Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alexander Farber
I think I will try this approach:

 \set localized_declaration `sed 's/this/that/' my_func.sql`
 :localized_declaration

Thank you for your input


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alexander Farber
Ah, I understand, that was the wrong EXECUTE, thank you.

Another idea: can't I use \set command for my purpose of localizing stored
functions?

\set my_func_declaration `sed 's/this/that/' my_func.sql`

But how to execute the declaration? I can only echo it with

select (:'my_func_declaration');


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Pavel Stehule
On Tue, Mar 2, 2021 at 17:55, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Thank you for the \! hint, Pavel, didn't know about that!
>
> Is it possible to have a pure SQL solution? (To avoid having to install
> "sed" on my Win 10 PC)
>

You should use the PL/pgSQL EXECUTE statement, not the SQL EXECUTE statement

https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

But I am not sure this helps: all of this is server side, while you are
doing some of the work on the client side
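Combining the two, a server-side sketch might look like this: wrap the regexp_replace() call in a PL/pgSQL DO block, where EXECUTE runs the resulting string as dynamic SQL (note the three distinct dollar-quote tags):

```sql
DO $do$
BEGIN
    -- Build the CREATE FUNCTION text, replace the placeholder, run the result:
    EXECUTE regexp_replace(
        $localize$
        CREATE OR REPLACE FUNCTION my_func()
        RETURNS text AS
        $func$
        SELECT '$(placeholder)';
        $func$ LANGUAGE sql IMMUTABLE;
        $localize$,
        '\$\(\w+\)', 'English word', 'g');
END
$do$;
```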






> Maybe by using EXECUTE?
>
> EXECUTE REGEXP_REPLACE(
> $localize$
>
> CREATE OR REPLACE FUNCTION my_func()
> RETURNS text AS
> $func$
> SELECT '$(placeholder)';
> $func$ LANGUAGE sql IMMUTABLE;
>
> $localize$, '\$\(\w+\)', 'English word', 'g');
>
> Unfortunately, I get the error:
>
> ERROR:  prepared statement "regexp_replace" does not exist
>
>
>


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alexander Farber
Thank you for the \! hint, Pavel, didn't know about that!

Is it possible to have a pure SQL solution? (To avoid having to install
"sed" on my Win 10 PC)

Maybe by using EXECUTE?

EXECUTE REGEXP_REPLACE(
$localize$

CREATE OR REPLACE FUNCTION my_func()
RETURNS text AS
$func$
SELECT '$(placeholder)';
$func$ LANGUAGE sql IMMUTABLE;

$localize$, '\$\(\w+\)', 'English word', 'g');

Unfortunately, I get the error:

ERROR:  prepared statement "regexp_replace" does not exist


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Pavel Stehule
On Tue, Mar 2, 2021 at 17:18, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Or is it possible to call external commands from an sql script, like
>
> \i "sed 's/this/that/' some.sql"
>

you can use \! statement for execution of external statements

Pavel


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alexander Farber
Or is it possible to call external commands from an sql script, like

\i "sed 's/this/that/' some.sql"


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alexander Farber
Yes, good point about the '\$', thank you Tom.

The reason I am trying not to use sed is that I deploy my database by
executing a single command:

psql words_en < words_en.sql

And the file words_en.sql has the contents:

\i words_hash.sql
\i words_all_letters.sql
\i words_get_hint.sql
\i words_get_notification.sql
\i ../words_common.sql
\i words_valid_tile.sql
\i words_get_moves.sql
\i words_answer_puzzle.sql
\i words_rare_letter_1.sql
\i words_rare_letter_2.sql

And then the ../words_common.sql creates tables and has 40 more "\i" calls.

So I was hoping to have some SQL command to localize my stored functions.

Best regards
Alex


Re: Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Tom Lane
Alexander Farber  writes:
> update pg_proc set prosrc = regexp_replace(prosrc, '$\(\w+\)','Hi
> english','g') where proname='localize_hello';

"$" is a metacharacter in regexes ... writing \$ might help.

(The idea of direct updates on the system catalogs seems like a really
bad one.  Why not pass the code through sed or the like ahead of
feeding it to psql?  But you'd have to get the regex syntax right
in any case.)
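A quick sketch of the difference (an unescaped $ is an end-of-string anchor, so the first pattern can never match):

```sql
-- Pattern with unescaped $: never matches, input comes back unchanged.
SELECT regexp_replace('SELECT ''$(hello)'';', '$\(\w+\)', 'Hi english', 'g');

-- Pattern with \$: matches and replaces the $(hello) placeholder.
SELECT regexp_replace('SELECT ''$(hello)'';', '\$\(\w+\)', 'Hi english', 'g');
```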

regards, tom lane




Localizing stored functions by replacing placeholders in their body

2021-03-02 Thread Alexander Farber
Hello,

I have an app using PostgreSQL 13.2, in 6 different human languages (each
using a different database, but the same source code).

Currently, to localize strings returned/set by the stored functions, I either
get localized strings from a table or maintain the stored function source code
in 6 different languages.

This is not very comfortable, and I would like to switch to using the same
source code (regardless of the human language) for all stored functions,
and after deploying a database just run a few commands to replace the
placeholders in the stored functions.

So I am trying:

CREATE OR REPLACE FUNCTION localize_hello()
RETURNS text AS
$func$
SELECT '$(hello)';
$func$ LANGUAGE sql IMMUTABLE;

And then:

update pg_proc set prosrc = regexp_replace(prosrc, '$\(\w+\)','Hi
english','g') where proname='localize_hello';

But the error is:

ERROR:  permission denied for table pg_proc

So I connect as user "postgres" and then the command seemingly succeeds,
but when I call the function, the delivered string is still the old one:

 select * from localize_hello();
  localize_hello
 ----------------
  $(hello)
 (1 row)

Is this the right approach? Do you have any advice here, please?

Thanks
Alex


Re: Pgbackrest version 2.28 Bug/Issue

2021-03-02 Thread Reid Thompson


On Tue, 2021-03-02 at 14:16 +0530, Brajendra Pratap Singh wrote:
>  unable to connect to 'dbname='postgres' port=5432': could not connect to 
> server: No such file or directory
>                                         Is the server running locally and 
> accepting
>                                         connections on Unix domain socket 
> "/var/run/postgresql/.s.PGSQL.5432"?

It's telling you right here why it failed, and what to check for.
You've configured pgbackrest to connect to the local postgres DB using
the Unix domain socket. That socket was not present when you attempted
to run pgbackrest.
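If in doubt about where the socket lives, the server itself can be asked (a sketch; run it over any working connection):

```sql
-- Shows the directory (or directories) where the server creates its socket:
SHOW unix_socket_directories;
-- If this is not /var/run/postgresql, either change the setting or point
-- pgbackrest at the right place (it has a pg-socket-path option, if I
-- recall correctly).
```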






Re: Batch update million records in prd DB

2021-03-02 Thread Yi Sun
Hi Kristjan,

Thank you for this information.

Regarding the memory being "slowly eaten away when doing updates within a
plsql loop": I want to check whether this issue exists in our current
PostgreSQL version. To get the developers to change to a Python loop, I
also need to show them proof. How can I reproduce this and watch the
memory slowly being eaten away, please? Is there an OS-level command or a
psql command to verify it? Thanks

On Tue, Mar 2, 2021 at 5:26 PM, Kristjan Mustkivi wrote:

> Hi Yi,
>
> I found that in postgres the memory is slowly eaten away when doing
> updates within plsql loop. It only gets released once the whole block
> completes. While it is ok for small tables you will eventually run out
> of memory for really big ones. The working approach was to do the loop
> in e.g a python script that called the DML statements and also called
> commit. Several million rows is fortunately relatively small number to
> update but once you get to billions this approach would not likely
> work. Note that after each batch you also should call VACUUM before
> starting a new one to avoid significant table bloat.
>
> BR,
>
> Kristjan
>
> On Wed, Feb 24, 2021 at 3:01 PM Yi Sun  wrote:
> >
> > Hello,
> >
> > I need to update several million records in a table in a production DB. If
> > I batch-update 1000 records and commit each time, will it affect the
> > production application, like the sample script below?
> >
> > Sample script:
> >
> > DO $MAIN$
> > DECLARE
> >   affect_count integer;
> >   chunk_size CONSTANT integer :=1000;
> >   sleep_sec CONSTANT numeric :=0.1;
> > BEGIN
> >
> >   loop
> >
> > exit when affect_count=0;
> >
> >   UPDATE tbl a
> >   SET name = ''
> >   WHERE a.id IN (SELECT id
> >FROM tbl b
> >WHERE name IS NULL
> >LIMIT chunk_size);
> >
> >   GET DIAGNOSTICS affect_count = ROW_COUNT;
> >
> >   commit;
> >
> >   PERFORM pg_sleep(sleep_sec);
> >
> >   end loop;
> > END;
> > $MAIN$;
> >
> > Thanks and best regards
>
>
>
> --
> Kristjan Mustkivi
>
> Email: kristjan.mustk...@gmail.com
>


Re: Batch update million records in prd DB

2021-03-02 Thread Yi Sun
Hi Michael,

Thank you; after creating an index on the temp table column, the time cost
became smaller.

On Tue, Mar 2, 2021 at 12:08 AM, Michael Lewis wrote:

> 1) Don't pretend it is a left join when your where clause will turn it
> into an INNER join.
> LEFT JOIN pol gp ON gab.policy_id = gp.id
> WHERE
> AND gp.name LIKE 'Mobile backup%'
> AND gp.deleted_at IS NOT NULL;
>
> 2) It is interesting to me that the row estimates are stable, but the
> number of rows filtered out and that are found by those two index
> scans changes so dramatically. Is your underlying data changing
> significantly during this run? Maybe I am not seeing something that should
> be obvious.
>
> 3) What is the execution plan for the update based on the temp table? It
> is hard to believe it takes 2 seconds to update 1000 rows. By the way, that
> temp table needs to be analyzed after it is created & populated with data,
> or the planner won't know how many rows it contains or any other stats
> about it. One advantage of the temp table should be that you have already
> found all the candidate rows and so the time that locks are held to update
> the 1000 target rows is smaller. Given you are doing a order by & limit in
> the use of the temp table, I might actually create an index on the id
> column to help the later runs. The temp table should likely remain in
> memory (temp_buffers) but still, btree is nice for ordered use.
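The preparation Michael describes might be sketched like so (table and column names are illustrative, not from the thread):

```sql
-- Collect the candidate ids once, so later batches don't rescan the big table:
CREATE TEMP TABLE candidate_rows AS
    SELECT id FROM tbl WHERE name IS NULL;

-- Temp tables are not autoanalyzed, so give the planner stats explicitly:
ANALYZE candidate_rows;

-- An index on id helps the ORDER BY id ... LIMIT batching over the temp table:
CREATE INDEX ON candidate_rows (id);
```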


Re: Pgbackrest version 2.28 Bug/Issue

2021-03-02 Thread David Rowley
On Tue, 2 Mar 2021 at 21:53, Brajendra Pratap Singh
 wrote:
> We are getting the below issue while executing the backup through pgbackrest.

> 2021-03-02 02:10:01.622 P00  ERROR: [056]: unable to find primary cluster - 
> cannot proceed

That's not an error that's coming from PostgreSQL.

You might have better luck asking the authors of pgbackrest. Check
their website for details on how they want issues reported.

David




Re: Batch update million records in prd DB

2021-03-02 Thread Kristjan Mustkivi
Hi Yi,

I found that in Postgres the memory is slowly eaten away when doing
updates within a plsql loop. It only gets released once the whole block
completes. While that is OK for small tables, you will eventually run out
of memory for really big ones. The working approach was to do the loop
in e.g. a Python script that called the DML statements and also called
commit. Several million rows is fortunately a relatively small number to
update, but once you get to billions this approach would likely not
work. Note that after each batch you should also call VACUUM before
starting a new one to avoid significant table bloat.

BR,

Kristjan

On Wed, Feb 24, 2021 at 3:01 PM Yi Sun  wrote:
>
> Hello,
>
> I need to update several million records in a table in a production DB. If I 
> batch-update 1000 records and commit each time, will it affect the production 
> application, like the sample script below?
>
> Sample script:
>
> DO $MAIN$
> DECLARE
>   affect_count integer;
>   chunk_size CONSTANT integer :=1000;
>   sleep_sec CONSTANT numeric :=0.1;
> BEGIN
>
>   loop
>
> exit when affect_count=0;
>
>   UPDATE tbl a
>   SET name = ''
>   WHERE a.id IN (SELECT id
>FROM tbl b
>WHERE name IS NULL
>LIMIT chunk_size);
>
>   GET DIAGNOSTICS affect_count = ROW_COUNT;
>
>   commit;
>
>   PERFORM pg_sleep(sleep_sec);
>
>   end loop;
> END;
> $MAIN$;
>
> Thanks and best regards



-- 
Kristjan Mustkivi

Email: kristjan.mustk...@gmail.com




Pgbackrest version 2.28 Bug/Issue

2021-03-02 Thread Brajendra Pratap Singh
Hi ,

We are getting the below issue while executing the backup through
pgbackrest.

2021-03-02 02:10:01.620 P00   INFO: backup command begin 2.28:
--archive-check --archive-copy --compress --db-timeout=7200
--log-level-console=detail --log-level-file=detail
--log-level-stderr=detail --log-path=/apayrman/pgbackrest/backup/log
--pg1-path=/apayothr/pgsql/12.3/data --process-max=12
--protocol-timeout=9000 --repo1-path=/apayrman/pgbackrest
--repo1-retention-archive=8 --repo1-retention-full=2 --stanza=uatpgdbserver
--start-fast --stop-auto
2021-03-02 02:10:01.622 P00   WARN: unable to check pg-1: [DbConnectError]
unable to connect to 'dbname='postgres' port=5432': could not connect to
server: No such file or directory
Is the server running locally and
accepting
connections on Unix domain socket
"/var/run/postgresql/.s.PGSQL.5432"?
2021-03-02 02:10:01.622 P00  ERROR: [056]: unable to find primary cluster -
cannot proceed
2021-03-02 02:10:01.622 P00   INFO: backup command end: aborted with
exception [056]

DB version :- postgresql 12.3(open source)

OS :- Centos8

Pgbackrest version :- 2.28

Pgbackrest.conf parameters :-

[uatpgdbserver]
pg1-path=/apayothr/pgsql/12.3/data
db-timeout=7200
db1-user=postgres

[global]
repo1-path=/apayrman/pgbackrest
compress=y
retention-archive=8
retention-full=2
process-max=12
log-path=/apayrman/pgbackrest/backup/log
start-fast=y
stop-auto=y
protocol-timeout=9000
archive-check=y
archive-copy=y

Please suggest the solution for above issue.

Thanks,
Singh