Re: pg_restore - generated column - not populating

2021-02-22 Thread Adrian Klaver

On 2/22/21 7:43 PM, Santosh Udupi wrote:
If I back up using pgAdmin, I am able to restore using pg_restore, but for
some reason pg_restore on the output from pg_dump does not create
values for the generated columns.




To troubleshoot this:

1) Stick to one dump/restore combination. The three versions you tried 
before just confuse the issue. For instance:


pg_dump -Ft mydb > mydb.tar
pg_restore -Ft -d mydb mydb.backup

makes no sense. As mydb.backup came from:

pg_dump -C -Fc mydb > mydb.backup

I have not tested, but I'm pretty sure pg_restore just ignored the
-Ft and treated the file as -Fc.


2) Be explicit in your dump and restore commands about -h(ost), -p(ort)
and -U(ser). I suspect you may not be restoring to where you think you are.


3) Closely follow the progress of both the dump and the restore.
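For example, a sketch with explicit connection settings (host, port, user
and database names are placeholders; --verbose lets you follow progress):

pg_dump -h db.example.com -p 5432 -U postgres -Fc --verbose mydb > mydb.backup
pg_restore -h db.example.com -p 5432 -U postgres --verbose -d mydb mydb.backup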



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_restore - generated column - not populating

2021-02-22 Thread Santosh Udupi
If I back up using pgAdmin, I am able to restore using pg_restore, but for
some reason pg_restore on the output from pg_dump does not create values
for the generated columns.

On Mon, Feb 22, 2021 at 6:20 PM Santosh Udupi  wrote:

> The logs don't show errors. I came across something similar here
> https://www.postgresql-archive.org/Dumping-restoring-fails-on-inherited-generated-column-td6114378.html
>
>
> but I'm not sure what the solution is.
>
> On Mon, Feb 22, 2021 at 5:57 PM Santosh Udupi  wrote:
>
>> I used the following commands for dump
>>
>> pg_dump -c mydb | gzip -9 > mydb.gz
>> pg_dump -C -Fc mydb > mydb.backup
>> pg_dump -Ft mydb > mydb.tar
>>
>> For restore, I created a blank database by issuing the command "createdb
>> mydb" and then tried
>>
>> gunzip -c mydb.gz | psql mydb
>> pg_restore -d mydb mydb.backup
>> pg_restore -Ft -d mydb mydb.backup
>>
>> I have tried -c, -C, schema-only, etc., but nothing has worked so far.
>>
>> I didn't check the Postgres logs. Thanks for the suggestion. I will check
>> that.
>>
>>
>>
>> On Mon, Feb 22, 2021 at 5:16 PM Adrian Klaver 
>> wrote:
>>
>>> On 2/22/21 5:08 PM, Santosh Udupi wrote:
>>> > Hi all,
>>> >
>>> > My database has tables with generated columns. I altered a table and
>>> > added a generated column as below:
>>> >
>>> > alter table billing add primary_bill_to_id int GENERATED ALWAYS as
>>> > ((info->>'vp')::int) stored
>>> >
>>> >
>>> > Now, when I do the pg_dump and pg_restore, this column does not get
>>> > populated. It remains null.
>>> >
>>> > "info" is a jsonb column in the table, and info->>'vp' has values in
>>> > multiple rows, but the generated column "primary_bill_to_id" is still
>>> > null after the restore.
>>> >
>>> > I am using Postgres version 13.
>>> >
>>> > Can you tell me what I am missing?
>>>
>>> Without the commands you used to do the dump and restore it will be
>>> difficult to come to any conclusions. It would also be helpful to look
>>> at the Postgres logs from the restore to see if there are any error
>>> messages.
>>>
>>> >
>>> > Thank you for your help.
>>> > Santosh
>>> >
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>


Re: Simple IN vs IN values performance

2021-02-22 Thread Michael Lewis
>
> Insert the values into a TEMPORARY TABLE, then join that to your main
> table?
>

In my experience, this is very performant but needs an analyze command
after populating the temp table to ensure there are statistics so the plan
doesn't go awry. Otherwise, I'm not sure it is different from a
materialized CTE or a subquery with OFFSET 0 at the end.
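A minimal sketch of that approach (table and column names are hypothetical):

CREATE TEMPORARY TABLE tmp_ids (id int);
INSERT INTO tmp_ids (id) VALUES (1465), (1478);  -- the former IN list
ANALYZE tmp_ids;                                 -- give the planner statistics
SELECT t.*
FROM my_table t
JOIN tmp_ids USING (id);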


Re: pg_restore - generated column - not populating

2021-02-22 Thread Santosh Udupi
The logs don't show errors. I came across something similar here
https://www.postgresql-archive.org/Dumping-restoring-fails-on-inherited-generated-column-td6114378.html


but I'm not sure what the solution is.

On Mon, Feb 22, 2021 at 5:57 PM Santosh Udupi  wrote:

> I used the following commands for dump
>
> pg_dump -c mydb | gzip -9 > mydb.gz
> pg_dump -C -Fc mydb > mydb.backup
> pg_dump -Ft mydb > mydb.tar
>
> For restore, I created a blank database by issuing the command "createdb
> mydb" and then tried
>
> gunzip -c mydb.gz | psql mydb
> pg_restore -d mydb mydb.backup
> pg_restore -Ft -d mydb mydb.backup
>
> I have tried -c, -C, schema-only, etc., but nothing has worked so far.
>
> I didn't check the Postgres logs. Thanks for the suggestion. I will check
> that.
>
>
>
> On Mon, Feb 22, 2021 at 5:16 PM Adrian Klaver 
> wrote:
>
>> On 2/22/21 5:08 PM, Santosh Udupi wrote:
>> > Hi all,
>> >
>> > My database has tables with generated columns. I altered a table and
>> > added a generated column as below:
>> >
>> > alter table billing add primary_bill_to_id int   GENERATED ALWAYS as
>> > ((info->>'vp')::int) stored
>> >
>> >
>> > Now, when I do the pg_dump and pg_restore, this column does not get
>> > populated. It remains null.
>> >
>> > "info" is a jsonb column in the table, and info->>'vp' has values in
>> > multiple rows, but the generated column "primary_bill_to_id" is still
>> > null after the restore.
>> >
>> > I am using Postgres version 13.
>> >
>> > Can you tell me what I am missing?
>>
>> Without the commands you used to do the dump and restore it will be
>> difficult to come to any conclusions. It would also be helpful to look
>> at the Postgres logs from the restore to see if there are any error
>> messages.
>>
>> >
>> > Thank you for your help.
>> > Santosh
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>


Re: pg_restore - generated column - not populating

2021-02-22 Thread Santosh Udupi
I used the following commands for dump

pg_dump -c mydb | gzip -9 > mydb.gz
pg_dump -C -Fc mydb > mydb.backup
pg_dump -Ft mydb > mydb.tar

For restore, I created a blank database by issuing the command "createdb
mydb" and then tried

gunzip -c mydb.gz | psql mydb
pg_restore -d mydb mydb.backup
pg_restore -Ft -d mydb mydb.backup

I have tried -c, -C, schema-only, etc., but nothing has worked so far.

I didn't check the Postgres logs. Thanks for the suggestion. I will check
that.



On Mon, Feb 22, 2021 at 5:16 PM Adrian Klaver 
wrote:

> On 2/22/21 5:08 PM, Santosh Udupi wrote:
> > Hi all,
> >
> > My database has tables with generated columns. I altered a table and
> > added a generated column as below:
> >
> > alter table billing add primary_bill_to_id int   GENERATED ALWAYS as
> > ((info->>'vp')::int) stored
> >
> >
> > Now, when I do the pg_dump and pg_restore, this column does not get
> > populated. It remains null.
> >
> > "info" is a jsonb column in the table, and info->>'vp' has values in
> > multiple rows, but the generated column "primary_bill_to_id" is still
> > null after the restore.
> >
> > I am using Postgres version 13.
> >
> > Can you tell me what I am missing?
>
> Without the commands you used to do the dump and restore it will be
> difficult to come to any conclusions. It would also be helpful to look
> at the Postgres logs from the restore to see if there are any error
> messages.
>
> >
> > Thank you for your help.
> > Santosh
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: pg_restore - generated column - not populating

2021-02-22 Thread Adrian Klaver

On 2/22/21 5:08 PM, Santosh Udupi wrote:

Hi all,

My database has tables with generated columns. I altered a table and 
added a generated column as below:


alter table billing add primary_bill_to_id int   GENERATED ALWAYS as
((info->>'vp')::int) stored


Now, when I do the pg_dump and pg_restore, this column does not get
populated. It remains null.

"info" is a jsonb column in the table, and info->>'vp' has values in
multiple rows, but the generated column "primary_bill_to_id" is still
null after the restore.

I am using Postgres version 13.

Can you tell me what I am missing?


Without the commands you used to do the dump and restore it will be 
difficult to come to any conclusions. It would also be helpful to look 
at the Postgres logs from the restore to see if there are any error 
messages.
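A quick sanity check after the restore (a generic catalog query, not
specific to this report) is to confirm the column is still defined as
generated -- if it is not, the target database probably already had a
plain column of that name:

SELECT column_name, is_generated, generation_expression
FROM information_schema.columns
WHERE table_name = 'billing'
  AND column_name = 'primary_bill_to_id';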




Thank you for your help.
Santosh




--
Adrian Klaver
adrian.kla...@aklaver.com




pg_restore - generated column - not populating

2021-02-22 Thread Santosh Udupi
Hi all,

My database has tables with generated columns. I altered a table and added
a generated column as below:

alter table billing add primary_bill_to_id int   GENERATED ALWAYS as
((info->>'vp')::int) stored


Now, when I do the pg_dump and pg_restore, this column does not get
populated. It remains null.

"info" is a jsonb column in the table, and info->>'vp' has values in
multiple rows, but the generated column "primary_bill_to_id" is still null
after the restore.

I am using Postgres version 13.

Can you tell me what I am missing?

Thank you for your help.
Santosh


Re: Simple IN vs IN values performance

2021-02-22 Thread Ron

On 2/22/21 9:00 AM, Oleksandr Voytsekhovskyy wrote:

Greetings,

We have queries with IN filters containing long lists of INT values.

Sometimes they run extremely slowly, and I found a suggestion to use the
syntax

Field IN (VALUES(1465), (1478), ...
Instead of
Field IN (1465, 1478, ...

In some cases it helps, but in others it makes the query run 1000+ times
slower.


Insert the values into a TEMPORARY TABLE, then join that to your main table?



So the issue is:
- some queries much faster with simple IN
- some queries much faster with IN + VALUES

Here is the explain output for a case where it's extremely slow:

->  HashAggregate  (cost=5.78..9.62 rows=385 width=4)
      Group Key: "*VALUES*".column1
      ->  Values Scan on "*VALUES*"  (cost=0.00..4.81 rows=385 width=4)

What is the right way to pass a long list of INT values to an IN filter?

I am using PostgreSQL 13.1 on Ubuntu


--
Angular momentum makes the world go 'round.


Re: fdatasync performance problem with large number of DB files

2021-02-22 Thread Tom Lane
Michael Brown  writes:
> I presume the reason postgres doesn't blindly run a sync() is that we
> don't know what other I/O is on the system and it'd be rude to affect
> other services. That makes sense, except for our environment the work
> done by the recursive fsync is orders of magnitude more disruptive than
> a sync().

Hmm.

> * is there a knob missing we can configure?

No.  The trouble with sync() is that per POSIX, it only schedules the
writes; there's no way to tell when the work has been done.  I see
that Linux offers stronger promises in this department, but I don't
think that's very portable.  Moreover, even on Linux there's no
way to detect whether any of the writes failed.

Barring some solution to those problems, we would be unlikely to take
a patch that uses sync() instead of fsync().

regards, tom lane




Re: Simple IN vs IN values performance

2021-02-22 Thread Michael Lewis
Wouldn't using “= any(array)” change how the query is planned? Or is the
concern just parsing the values?


Re: Migrate database to different versions

2021-02-22 Thread David G. Johnston
On Monday, February 22, 2021, Lorenzzo Egydio Mollinar da Cruz <
loren...@iftm.edu.br> wrote:

> I need to migrate a database from PostgreSQL 9 to PostgreSQL 12. I will
> be updating my MOODLE installation, and the new version does not support
> Postgres 9. Is there a procedure for migrating this database from version
> 9 to version 12?
>
>
 https://www.postgresql.org/docs/current/pgupgrade.html
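A hedged sketch of an invocation (paths follow RHEL-style packaging and
will differ on your system; keep --check for a dry run, drop it for the
real upgrade):

/usr/pgsql-12/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-9.6/bin \
  --new-bindir=/usr/pgsql-12/bin \
  --old-datadir=/var/lib/pgsql/9.6/data \
  --new-datadir=/var/lib/pgsql/12/data \
  --check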

David J.


Re: Simple IN vs IN values performance

2021-02-22 Thread David G. Johnston
On Monday, February 22, 2021, Oleksandr Voytsekhovskyy  wrote:

> What is the right way to pass a long list of INT values to an IN filter?
>

Don’t.

Pass in a delimited string, then parse that string into an array and use “=
any(array)”.

This has the primary benefit of making the input a single parameter.
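A sketch of that pattern, assuming $1 is a single text parameter such as
'1465,1478,1490' and the table/column names are hypothetical:

SELECT *
FROM my_table
WHERE field = ANY (string_to_array($1, ',')::int[]);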

David J.


Migrate database to different versions

2021-02-22 Thread Lorenzzo Egydio Mollinar da Cruz
I need to migrate a database from PostgreSQL 9 to PostgreSQL 12. I will be
updating my MOODLE installation, and the new version does not support
Postgres 9. Is there a procedure for migrating this database from version 9
to version 12?

Regards,

Lorenzzo Egydio Mollinar da Cruz
CTIC - CAUPT - IFTM
(34) 3326-1407  /  9285-2836
loren...@iftm.edu.br




fdatasync performance problem with large number of DB files

2021-02-22 Thread Michael Brown
We've encountered a production performance problem with pg13 related to
how it fsyncs the whole data directory in certain scenarios, similar to
what Paul (bcc'ed) described in a post to pgsql-hackers [1].

Background:

We've observed that the full recursive fsync is triggered when:

* pg_basebackup receives a streaming backup (via [2] fsync_dir_recurse
or fsync_pgdata) unless --no-sync is specified
* postgres starts up unclean (via [3] SyncDataDirectory)

We run multiple postgres clusters, and some of those clusters have many
(~450) databases (one database per customer), meaning that the postgres
data directory has around 700,000 files.

On one of our less loaded servers this takes ~7 minutes to complete, but
on another [4] this takes ~90 minutes.

Obviously this is an untenable risk. We've modified our process that
bootstraps a replica via pg_basebackup to instead do "pg_basebackup
--no-sync…" followed by a "sync", but we don't have any way to do the
equivalent for the postgres startup.
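Roughly, the bootstrap now looks like this (host, user and data directory
are placeholders):

pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/13/data -X stream --no-sync
sync   # one global flush instead of ~700,000 individual fsync() calls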

I presume the reason postgres doesn't blindly run a sync() is that we
don't know what other I/O is on the system and it'd be rude to affect
other services. That makes sense, except for our environment the work
done by the recursive fsync is orders of magnitude more disruptive than
a sync().

My questions are:

* is there a knob missing we can configure?
* can we get a knob to use a single sync() call instead of a recursive
fsync()?
* would you be open to merging a patch providing said knob?
* is there something else we missed?

Thanks!

[1]:
https://www.postgresql.org/message-id/flat/caeet0zhgnbxmi8yf3ywsdzvb3m9cbdsgzgftxscq6agcbzc...@mail.gmail.com
[2]:
https://github.com/postgres/postgres/blob/master/src/bin/pg_basebackup/pg_basebackup.c#L2181
[3]:
https://github.com/postgres/postgres/blob/master/src/backend/access/transam/xlog.c#L6495
[4]: It should be identical config-wise. It isn't starved for I/O but
does have other regular write workloads.

-- 
Michael Brown
Civilized Discourse Construction Kit, Inc.
https://www.discourse.org/





Re: Re: Problem enabling LDAP login

2021-02-22 Thread Marcelo Lacerda
Thank you João, that solved the problem!


Simple IN vs IN values performance

2021-02-22 Thread Oleksandr Voytsekhovskyy
Greetings,

We have queries with IN filters containing long lists of INT values.

Sometimes they run extremely slowly, and I found a suggestion to use the
syntax
Field IN (VALUES(1465), (1478), ...
Instead of
Field IN (1465, 1478, ...
In some cases it helps, but in others it makes the query run 1000+ times
slower.

So the issue is:
- some queries much faster with simple IN
- some queries much faster with IN + VALUES

Here is the explain output for a case where it's extremely slow:

->  HashAggregate  (cost=5.78..9.62 rows=385 width=4)
      Group Key: "*VALUES*".column1
      ->  Values Scan on "*VALUES*"  (cost=0.00..4.81 rows=385 width=4)

What is the right way to pass a long list of INT values to an IN filter?

I am using PostgreSQL 13.1 on Ubuntu

Re: JSONB_AGG: aggregate function calls cannot be nested

2021-02-22 Thread Alexander Farber
Ahh, thank you all -

 select row_to_json(x) FROM (SELECT
    jsonb_agg(day) AS day,
    jsonb_agg(completed) AS completed,
    jsonb_agg(expired) AS expired
 from (
    SELECT TO_CHAR(finished, 'YYYY-MM-DD') AS day,
        count(*) filter (where reason in ('regular', 'resigned')) AS completed,
        count(*) filter (where reason = 'expired') AS expired
    FROM words_games
    WHERE finished > CURRENT_TIMESTAMP - INTERVAL '2 week'
    GROUP BY day
 ) t
 ORDER BY day) x;



                                  row_to_json
--------------------------------------------------------------------------------
 {"day":["2021-02-16", "2021-02-20", "2021-02-10", "2021-02-09", "2021-02-15",
 "2021-02-19", "2021-02-17", "2021-02-11", "2021-02-22", "2021-02-08",
 "2021-02-14", "2021-02-21", "2021-02-12", "2021-02-13", "2021-02-18"],
 "completed":[744, 802, 864, 770, 767, 745, 837, 792, 751, 32, 843, 808, 838,
 853, 751],
 "expired":[237, 168, 230, 263, 203, 257, 206, 184, 337, 11, 231, 380, 231,
 293, 196]}
(1 row)


Re: Streaming replication between different OS

2021-02-22 Thread Peter J. Holzer
On 2021-02-22 23:02:12 +0530, Atul Kumar wrote:
> As I am new to postgres, could you help me with how to check the collation

show LC_COLLATE;

> and what is the de_DE locale?

The locale, i.e. the language-specific rules (sorting, formatting of
numbers, dates, etc.) for German ("de") as spoken in Germany ("DE").
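To see the collation each database was created with (a generic catalog
query):

SELECT datname, datcollate, datctype FROM pg_database;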

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Streaming replication between different OS

2021-02-22 Thread Atul Kumar
Hi Tom,

As I am new to postgres, could you help me with how to check the collation,
and what is the de_DE locale?


Regards






On Monday, February 22, 2021, Tom Lane  wrote:

> Ganesh Korde  writes:
> > On Mon, 22 Feb 2021, 11:48 am Atul Kumar,  wrote:
> >> I have a postgres 9.6 cluster running on CentOS 6.8, so I just wanted
> >> to know whether I can configure streaming replication with the same
> >> postgres version, i.e. 9.6, running on CentOS 7.
>
> > Should not be a problem if both OS architectures (32-bit/64-bit) are the same.
>
> The other thing you have to worry about is whether the collations you
> use sort the same on both systems ... if they don't, you'll have
> effectively-corrupt indexes on text columns on the standby.
>
> According to
>
> https://wiki.postgresql.org/wiki/Locale_data_changes
>
> you should be okay for the rhel/centos 6 to 7 migration unless you
> use the de_DE locale.  But I don't know how thoroughly that's been
> checked.
>
> Using logical not physical replication might be safer.
>
> regards, tom lane
>


Re: Streaming replication between different OS

2021-02-22 Thread Peter J. Holzer
On 2021-02-22 10:36:56 -0500, Tom Lane wrote:
> The other thing you have to worry about is whether the collations you
> use sort the same on both systems ... if they don't, you'll have
> effectively-corrupt indexes on text columns on the standby.
> 
> According to
> 
> https://wiki.postgresql.org/wiki/Locale_data_changes
> 
> you should be okay for the rhel/centos 6 to 7 migration unless you
> use the de_DE locale.  But I don't know how thoroughly that's been
> checked.

I wrote this little script:
https://git.hjp.at:3000/hjp/pgcollate/src/branch/master/pgcollate

It just creates a table with every valid Unicode codepoint (and a few
short strings) and dumps it in sorted order. The idea is that you run it
on both systems (or before and after an upgrade) and compare the
outputs. If they differ, text indexes are probably broken.
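The core idea as a minimal SQL sketch (a narrow codepoint range only; the
script itself covers all valid codepoints) -- if the hash differs between
the two systems, their default-collation sort order differs:

SELECT md5(string_agg(chr(cp), '' ORDER BY chr(cp)))
FROM generate_series(32, 591) AS cp;  -- ASCII through Latin Extended-A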

Note that it only checks the sort order of single codepoints thoroughly,
not that of longer strings. So even if the output is the same, the
collation may still be different. Since the number of possible
combinations is essentially infinite, a systematic check is impossible.
I would be happy to add additional strings, though, especially if they
are known to cause trouble.

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"




Re: Streaming replication between different OS

2021-02-22 Thread Tom Lane
Ganesh Korde  writes:
> On Mon, 22 Feb 2021, 11:48 am Atul Kumar,  wrote:
>> I have a postgres 9.6 cluster running on CentOS 6.8, so I just wanted to
>> know whether I can configure streaming replication with the same postgres
>> version, i.e. 9.6, running on CentOS 7.

> Should not be a problem if both OS architectures (32-bit/64-bit) are the same.

The other thing you have to worry about is whether the collations you
use sort the same on both systems ... if they don't, you'll have
effectively-corrupt indexes on text columns on the standby.

According to

https://wiki.postgresql.org/wiki/Locale_data_changes

you should be okay for the rhel/centos 6 to 7 migration unless you
use the de_DE locale.  But I don't know how thoroughly that's been
checked.

Using logical not physical replication might be safer.

regards, tom lane




Re: Problem enabling LDAP login

2021-02-22 Thread João Gaspar
Hi,
Marcelo, check this; I had a similar problem with the LDAP filter in
pg_hba.conf. I don't know if it helps in your case, but in mine I needed
to add the OU (Organizational Unit):

https://www.postgresql.org/message-id/CAM%2BzXj0b71fckDSTxPwX58ze-9mtD4UxbktzGCmUYAnjoZti3A%40mail.gmail.com
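For reference, a hypothetical pg_hba.conf line with the OU included in the
base DN (every value below is a placeholder):

host all all 172.17.1.0/24 ldap ldapserver="my_ldap_server" ldapbasedn="OU=Users,DC=example,DC=com" ldapsearchattribute="sAMAccountName" ldapbinddn="CN=my_bind_dn,OU=Service,DC=example,DC=com" ldapbindpasswd="my_bind_password"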



Best regards,
João Gaspar


Marcelo Lacerda  escreveu no dia segunda,
22/02/2021 à(s) 14:02:

> I'm having some trouble configuring ldap login to postgres. I have
> configured LDAP on pg_hba.conf and postgres picks up the correct
> configuration during login, but I get an error message whenever I attempt
> to log in with psql to a database named teste.
>
> psql: error: could not connect to server: FATAL:  LDAP authentication
> failed for user "my_user_id"
>
> Looking at the log I get a more detailed message but it isn't very useful
> either
>
> 2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
> LOG:  could not search LDAP for filter "(sAMAccountName=my_user_id)" on
> server "my_ldap_server": Operations error
> 2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
> DETAIL:  LDAP diagnostics: 04DC: LdapErr: DSID-0C09075A, comment: In
> order to perform this operation a successful bind must be completed on the
> connection., data 0, v1db1
> 2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
> FATAL:  LDAP authentication failed for user "my_user_id"
> 2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
> DETAIL:  Connection matched pg_hba.conf line 96: "host    all
> all 172.17.1.0/24 ldap
> ldapserver="my_ldap_server" ldapbasedn="mybasedn"
> ldapsearchattribute="sAMAccountName" ldapbinddn="CN=my_bind_dn"
> ldapbindpasswd="my_bind_password"
>
> Searching the postgres source code, I was able to find where the problem
> happens [1]; however, it makes things a bit more confusing, because from
> my understanding postgres was able to successfully bind to the LDAP server
> on line 2676, but on line 2700 it reported that a bind was necessary
> before querying.
>
> Looking at the TCP packets with tcpdump I was able to notice that
> openldap does in fact receive the my_user_id information from the server,
> but for some reason it tries to perform another operation afterwards and
> fails.
>
> Can anyone help me with this?
>
> [1] -
> https://github.com/postgres/postgres/blob/ca3b37487be333a1d241dab1bbdd17a211a88f43/src/backend/libpq/auth.c#L2700
>


cannot promote after recovery for PITR

2021-02-22 Thread Luca Ferrari
I'm running 12.5. I've restored a backup copy (from barman) to a
specific restore point.
The system has gone into pause, and I've checked that everything is as
I was expecting.
Now I'm unable to promote the cluster:

testdb=# SELECT pg_is_wal_replay_paused();
 pg_is_wal_replay_paused
-------------------------
 t
(1 row)

testdb=# select pg_promote( true, 120 );
WARNING:  server did not promote within 120 seconds
 pg_promote
------------
 f
(1 row)

and the only thing I've got into the logs is

WARNING:  server did not promote within 120 seconds

Similarly, I cannot promote via pg_ctl, even though I've checked that
PGDATA/promote is there.


If, instead, I do pg_wal_replay_resume(), the server is promoted (of
course not at the PITR I want).
Am I missing something?

Thanks,
Luca
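One avenue worth checking (a sketch, assuming the restore was driven by
recovery_target_* settings): recovery_target_action controls what happens
when the target is reached, and setting it to 'promote' before starting
recovery makes the server exit recovery at the restore point instead of
pausing:

# PostgreSQL 12 reads recovery settings from the main configuration files
recovery_target_name = 'my_restore_point'   # hypothetical restore point
recovery_target_action = 'promote'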




Re: JSONB_AGG: aggregate function calls cannot be nested

2021-02-22 Thread David G. Johnston
On Monday, February 22, 2021, Alexander Farber 
wrote:

>
>
> But how do I get a JSON map of lists here? I am trying:
>
> {
>"day": [ "2021-02-08", "2021-02-09", ... ],
>"completed": [ 475, 770, ...],
>"expired": [ 155, 263 , ...]
> }
>

If you want the days aggregated, then don't "group by day".

David J.


Re: JSONB_AGG: aggregate function calls cannot be nested

2021-02-22 Thread Alexander Farber
Thank you Thomas, this results in

 select
day AS day,
jsonb_agg(completed) AS completed,
jsonb_agg(expired) AS expired
from (
SELECT TO_CHAR(finished, 'YYYY-MM-DD') AS day,
count(*) filter (where reason in ('regular', 'resigned')) AS
completed,
count(*) filter (where reason = 'expired') AS expired
FROM words_games
WHERE finished > CURRENT_TIMESTAMP - INTERVAL '2 week'
GROUP BY day
) t
GROUP BY day
ORDER BY day;
    day     | completed | expired
------------+-----------+---------
 2021-02-08 | [481] | [155]
 2021-02-09 | [770] | [263]
 2021-02-10 | [864] | [230]
 2021-02-11 | [792] | [184]
 2021-02-12 | [838] | [231]
 2021-02-13 | [853] | [293]
 2021-02-14 | [843] | [231]
 2021-02-15 | [767] | [203]
 2021-02-16 | [744] | [237]
 2021-02-17 | [837] | [206]
 2021-02-18 | [751] | [196]
 2021-02-19 | [745] | [257]
 2021-02-20 | [802] | [168]
 2021-02-21 | [808] | [380]
 2021-02-22 | [402] | [255]
(15 rows)

But how do I get a JSON map of lists here? I am trying:

select row_to_json (x) FROM (SELECT
day AS day,
jsonb_agg(completed) AS completed,
jsonb_agg(expired) AS expired
from (
SELECT TO_CHAR(finished, 'YYYY-MM-DD') AS day,
count(*) filter (where reason in ('regular', 'resigned')) AS
completed,
count(*) filter (where reason = 'expired') AS expired
FROM words_games
WHERE finished > CURRENT_TIMESTAMP - INTERVAL '2 week'
GROUP BY day
) t
GROUP BY day
ORDER BY day) x;
                       row_to_json
---------------------------------------------------------
 {"day":"2021-02-08","completed":[475],"expired":[155]}
 {"day":"2021-02-09","completed":[770],"expired":[263]}
 {"day":"2021-02-10","completed":[864],"expired":[230]}
 {"day":"2021-02-11","completed":[792],"expired":[184]}
 {"day":"2021-02-12","completed":[838],"expired":[231]}
 {"day":"2021-02-13","completed":[853],"expired":[293]}
 {"day":"2021-02-14","completed":[843],"expired":[231]}
 {"day":"2021-02-15","completed":[767],"expired":[203]}
 {"day":"2021-02-16","completed":[744],"expired":[237]}
 {"day":"2021-02-17","completed":[837],"expired":[206]}
 {"day":"2021-02-18","completed":[751],"expired":[196]}
 {"day":"2021-02-19","completed":[745],"expired":[257]}
 {"day":"2021-02-20","completed":[802],"expired":[168]}
 {"day":"2021-02-21","completed":[808],"expired":[380]}
 {"day":"2021-02-22","completed":[410],"expired":[255]}
(15 rows)

While I would actually need:

{
   "day": [ "2021-02-08", "2021-02-09", ... ],
   "completed": [ 475, 770, ...],
   "expired": [ 155, 263 , ...]
}

And then I could feed the data into the Chart.js chart shown at the bottom
of my web page https://slova.de/top

Currently I do a simple SELECT query and construct the JSON map of lists
in the Java code of my servlet.

Thank you
Alex


Problem enabling LDAP login

2021-02-22 Thread Marcelo Lacerda
I'm having some trouble configuring ldap login to postgres. I have
configured LDAP on pg_hba.conf and postgres picks up the correct
configuration during login, but I get an error message whenever I attempt
to log in with psql to a database named teste.

psql: error: could not connect to server: FATAL:  LDAP authentication
failed for user "my_user_id"

Looking at the log I get a more detailed message but it isn't very useful
either

2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]: LOG:
 could not search LDAP for filter "(sAMAccountName=my_user_id)" on server
"my_ldap_server": Operations error
2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
DETAIL:  LDAP diagnostics: 04DC: LdapErr: DSID-0C09075A, comment: In
order to perform this operation a successful bind must be completed on the
connection., data 0, v1db1
2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
FATAL:  LDAP authentication failed for user "my_user_id"
2021-02-22 10:41:29 -03:172.17.1.139(54178):my_user_id@teste:[50727]:
DETAIL:  Connection matched pg_hba.conf line 96: "host    all
all 172.17.1.0/24 ldap ldapserver="my_ldap_server"
ldapbasedn="mybasedn" ldapsearchattribute="sAMAccountName"
ldapbinddn="CN=my_bind_dn" ldapbindpasswd="my_bind_password"

Searching the postgres source code, I was able to find where the problem
happens [1]; however, it makes things a bit more confusing, because from my
understanding postgres was able to successfully bind to the LDAP server on
line 2676, but on line 2700 it reported that a bind was necessary before
querying.

Looking at the TCP packets with tcpdump I was able to notice that openldap
does in fact receive the my_user_id information from the server, but for
some reason it tries to perform another operation afterwards and fails.

Can anyone help me with this?

[1] -
https://github.com/postgres/postgres/blob/ca3b37487be333a1d241dab1bbdd17a211a88f43/src/backend/libpq/auth.c#L2700


Re: Streaming replication between different OS

2021-02-22 Thread Ganesh Korde
Should not be a problem if both OS architectures (32-bit/64-bit) are the same.

On Mon, 22 Feb 2021, 11:48 am Atul Kumar,  wrote:

> Hi,
>
>
> I have a postgres 9.6 cluster running on CentOS 6.8, so I just wanted to
> know whether I can configure streaming replication with the same postgres
> version, i.e. 9.6, running on CentOS 7.
>
> Suggestions are welcome, as the CentOS versions are different: one is 6.8
> and the other is 7.
>
> Also please let me know if there will be any challenge in case of failover.
>