Re: pg_stat_statements IN problem

2023-10-02 Thread Wim Bertels
byme@byme.email wrote on Mon 02-10-2023 at 16:19 [+]:
> 
> 
> > Is there a possibility that pg_stat_statements will be improved to
> > handle IN? This problem makes it much less useful right now.

not sure what the question is,
but if you replace pg_stat_statements with another view/table,
the problem/answer would be the same

https://www.postgresql.org/docs/current/functions-comparisons.html#FUNCTIONS-COMPARISONS-IN-SCALAR
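To illustrate the IN problem being referred to (a sketch; the table t and the constants are made up): pg_stat_statements normalizes constants to $1, $2, ..., so IN lists of different lengths produce different normalized query texts and therefore separate entries:

```sql
-- These two logically identical lookups become two distinct entries,
-- because the length of the IN list is part of the normalized text:
SELECT * FROM t WHERE id IN (1, 2);     -- stored as: ... IN ($1, $2)
SELECT * FROM t WHERE id IN (1, 2, 3);  -- stored as: ... IN ($1, $2, $3)

-- Inspecting the statistics then shows both variants separately:
SELECT query, calls
FROM pg_stat_statements
WHERE query LIKE '%IN (%';
```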




Re: specifying multiple options in URI psql behaviour?

2023-10-02 Thread Wim Bertels
Tom Lane wrote on Mon 02-10-2023 at 10:21 [-0400]:
> Wim Bertels  writes:
> > * but if you put more than 1 option, then it doesn't?:
> 
> > #psql postgresql://myuser@myserver/mydb?connect_timeout=10&target_session_attrs=any
> 
> Maybe you forgot to quote that?  Ampersand is a shell metacharacter.

yes indeed, & puts the command in the background (bg and fg), thank you Tom

# psql 'postgresql://myuser@myserver/mydb?connect_timeout=10&target_session_attrs=any'

or just

# psql postgresql://myuser@myserver/mydb?connect_timeout=10'&'target_session_attrs=any
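For reference, a minimal shell sketch of what goes wrong (not psql-specific): an unquoted & ends the command and runs it in the background, so the second option never reaches psql; quoting keeps the URI as one argument:

```shell
# Single-quoting keeps the '&' literal, so the whole URI is one word:
uri='postgresql://myuser@myserver/mydb?connect_timeout=10&target_session_attrs=any'
printf '%s\n' "$uri"
# Unquoted, the shell would treat '&' as "run the preceding command in
# the background" and split the URI at that point.
```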

> 
> (It does seem to work for me, as long as I put quotes around the
> URL.)
> 
> regards, tom lane



specifying multiple options in URI psql behaviour?

2023-10-02 Thread Wim Bertels
Hello,

can anyone confirm this?:

* connections with at most 1 option after the question mark work:

like

#psql postgresql://myuser@myserver/mydb?connect_timeout=10

or

#psql postgresql://myuser@myserver/mydb?target_session_attrs=any

* but if you put more than 1 option, then it doesn't?:

#psql postgresql://myuser@myserver/mydb?connect_timeout=10&target_session_attrs=any

--

https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS

https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS

mvg,
Wim


Re: How To: A large [2D] matrix, 100,000+ rows/columns

2023-06-09 Thread Wim Bertels
Joe Conway wrote on Fri 09-06-2023 at 09:16 [-0400]:
> On 6/8/23 22:17, Pat Trainor wrote:
> > I need to have a very large matrix to maintain & query, and if not 
> > (1,600 column limit), then how could such data be broken down to
> > work?
> 
>   100,000 rows *
>   100,000 columns *
>   8 bytes (assuming float8)
> = about 80 GB per matrix if I got the math correct.
> 
> 

based on my personal experience i would not use postgres in the case
where you need many columns; you can work around this with json for
example, but it will likely end up being less easy to work with

as Joe replied: R or Python are probably a better fit,
or another database that can easily handle a lot of columns,
postgres is a great database, but not when you need a lot of columns

(as you noted:
there might be another storage backend for postgres that can handle
this better (or in the future?), but i don't think there is one;
also there is the standard 8K page size anyway,
so that is the first bottleneck (you can change this value, if you
compile postgres yourself)
https://www.postgresql.org/docs/current/limits.html )
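If one did want to keep such a matrix in postgres, one sketch (my own illustration, not from the thread; table and column names are made up) is one table row per matrix row, with the values in a float8 array, which sidesteps the 1,600-column limit:

```sql
-- One matrix row per table row; 100,000 values fit in a single array
-- column, so the column limit does not apply (large arrays are stored
-- out of line via TOAST).
CREATE TABLE matrix (
    row_id int PRIMARY KEY,
    vals   float8[] NOT NULL
);

-- Element access is 1-based:
-- SELECT vals[17] FROM matrix WHERE row_id = 42;
```

Whether this is practical depends on the access pattern; whole-row reads are cheap, but column-wise operations across all rows are not.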

Wim




Re: nested xml/json to table

2023-03-17 Thread Wim Bertels
Thomas Kellerer wrote on Fri 17-03-2023 at 11:21 [+0100]:
> Wim Bertels wrote on 17.03.2023 at 11:05:
> > what would be the general idea: "easily" convert a hierarchical
> > structure like json or xml to a table; for example creating columns
> > by appending the key-names when going down the tree, using null for
> > empty values, adding more columns as needed by the given structure.
> > (1-way operation)
> 
> 
> You can't have a function that returns a different set of columns
> each time you call it
> (without specifying the output columns - which you don't want).

Hello Thomas,

thanks for the feedback,
i was wondering about existing built-in functions or
extensions (not CREATE FUNCTION)

kind regards,
Wim




nested xml/json to table

2023-03-17 Thread Wim Bertels
Hello,

in python pandas there is, for example, a json_normalize function;
i didn't find something similar or better in postgresql?

what would be the general idea: "easily" convert a hierarchical
structure like json or xml to a table; for example creating columns by
appending the key-names when going down the tree, using null for
empty values, adding more columns as needed by the given structure.
(1-way operation)

a few conceptual gists:
jsonX=
{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Sta..",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-m..",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}

select *
from json_to_table('jsonX');
-- generated columns with no data/only nulls could be removed..
-- arrays could be unnested in the process as well

glossary | glossary.title   | glossary.GlossDiv.title | ..
---------+------------------+-------------------------+----
null     | example glossary | S                       | ..

the last column: glossary.GlossDiv.GlossList.GlossEntry.GlossSee
with value "markup"

---

what if there are different structures that need to be combined?
(they could be added in the same manner as before)

jsonY=
{
    "s1": [{
        "f1": "a",
        "f2": "b",
        "f3": { "f3.1": "c",
                "f3.2": "d" }
    },
    {
        "f1": "e",
        "f4": "g"
    }]
}
   
select *
from json_to_table('jsonY');
-- generated columns with no data/only nulls could be removed..
-- separator sign is untrusted

s1   | s1.f1 | s1.f2 | s1.f3 | s1.f3.f3.1 | s1.f3.f3.2 | s1.f4
-----+-------+-------+-------+------------+------------+-------
null | a     | b     | null  | c          | d          | null
null | e     | null  | null  | null       | null       | g
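A minimal sketch of the intended flattening semantics (illustrative only, in Python; the goal would be a built-in, not plpython; arrays and the untrusted separator are left out):

```python
def flatten(obj, prefix=""):
    """Flatten a nested dict into {"key.subkey...": value} pairs,
    appending key names while going down the tree."""
    row = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse: the nested keys extend the column name.
            row.update(flatten(value, path))
        else:
            row[path] = value
    return row

jsonX = {"glossary": {"title": "example glossary",
                      "GlossDiv": {"title": "S"}}}
print(flatten(jsonX))
# -> {'glossary.title': 'example glossary', 'glossary.GlossDiv.title': 'S'}
```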


any ideas or suggestions (apart from plpython)? 
Wim 


Re: Best Open Source OS for Postgresql

2023-02-01 Thread Wim Bertels
Hello Giovanni,

it depends:
do you know a truly open source OS without proprietary blobs for
firmware or device drivers?

Debian used to try to make a clear separation here, with 'main' being the
default section when installing;
unfortunately this practically no longer applies to proprietary firmware, if i
understand it correctly, unless you check yourself:
https://www.debian.org/vote/2022/vote_003
vs before:
https://www.debian.org/releases/squeeze/i386/release-notes/ch-whats-new.en.html#nonfree-firmware
which i guess is still the case, but now it gets loaded/installed by default
(easy functionality over open source)

I would choose a distro with LTS (long term support),
as the last post mentions Ubuntu and Mint,
this would only apply to Ubuntu LTS (or up in the family tree Debian LTS)

i have had good experiences with the link from Brent,
but i imagine other serious LTS distros should work just as well

hth,
Wim

Brent Wood wrote on Wed 01-02-2023 at 06:45 [+]:
I would suggest any major Ubuntu based distro, I have a personal preference for 
Mint.

I'd also suggest you set up the official Postgres repo, rather than using a 
distro repo, so your installs come directly from there.

https://wiki.postgresql.org/wiki/Apt






Re: Creating constraint dynamically

2022-08-22 Thread Wim Bertels
sivapostg...@yahoo.com wrote on Mon 22-08-2022 at 07:29 [+]:
> 
> ALTER TABLE public.tx_barcode_stock 
> ADD CONSTRAINT "tx_barcode_stock_CK1" CHECK 
> ( (branchcode = '1'::bpchar  and barcodeitem = 'Y'::bpchar and
> closingstock >= 0::numeric)  Or (branchcode = '1' and barcodeitem =
> 'N'::bpchar and closingstock >= 0::numeric )  Or (branchcode =
> '2'::bpchar  and barcodeitem = 'Y'::bpchar and closingstock >=
> 0::numeric)  Or (branchcode = '2' and  barcodeitem = 'N'::bpchar and
> closingstock >= 0::numeric ) ) NOT VALID; 
> 
> After creation, when we check what we find is  [ in PgAdmin ]
> ALTER TABLE public.tx_barcode_stock
>     ADD CONSTRAINT "tx_barcode_stock_CK1" CHECK (branchcode::bpchar =
> '1'::bpchar AND barcodeitem = 'Y'::bpchar AND closingstock >=
> 0::numeric OR branchcode::text = '1'::text AND barcodeitem =
> 'N'::bpchar AND closingstock >= 0::numeric OR branchcode::bpchar =
> '2'::bpchar AND barcodeitem = 'Y'::bpchar AND closingstock >=
> 0::numeric OR branchcode::text = '2'::text AND barcodeitem =
> 'N'::bpchar AND closingstock >= 0::numeric)
>     NOT VALID;
> 
> We have only one bracket, in the final updated one.
> 
> Since there are AND and OR conditions, without brackets the whole
> conditions becomes useless. 

did you see
https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-PRECEDENCE
?


kind regards,
Wim


Re: Can I get the number of results plus the results with a single query?

2022-08-22 Thread Wim Bertels
Perry Smith wrote on Mon 15-08-2022 at 08:49 [-0500]:
> I like to have what I call “baby sitting” messages such as “Completed
> 15 out of 1023”.  To do this, I need the number of results a query
> returns but I also need the results.
> 
> Currently I’m doing this with two queries such as:
> 
>     SELECT COUNT(*) FROM table WHERE …. expression …
>     SELECT * FROM table WHERE …. expression …
> 
> But this requires two queries.  Is there a way to do the same thing
> with just one query somehow?
> 
> I’ve been toying with row_number() and then sort by row_number
> descending and pick off the first row as the total number.  

if i understand your question:

count(*) over ()

(you can use aggregate functions as window functions as well, not only
the dedicated window functions)
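Spelled out (a sketch; the table and filter are placeholders from the quoted question):

```sql
-- Every result row carries the total row count of the result set,
-- so one query returns both the data and the count:
SELECT count(*) OVER () AS total_rows,
       t.*
FROM my_table t
WHERE ...;  -- same expression as before
```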



kind regards,
Wim


Re: \set \e and newline

2022-02-28 Thread Wim Bertels
Laurenz Albe wrote on Fri 25-02-2022 at 10:33 [+0100]:
> On Thu, 2022-02-24 at 18:16 +0100, Wim Bertels wrote:
> This was interpreted as if you had entered it in a single line:
> 
> \set x 1 select :x
> 
> So "x" is now "1select1;".

yes

> 
> I don't quite understand what you mean, but the behavior of \e
> changed
> (got more sane) in v14.

yes,
(this is just a summary of different cases)

> 
> > variation of case 2:
> > postgres=# \e
> > 
> > -- enter the following in the editor (the same code as above)
> > 
> > \set x 1
> > select :x;
> > 
> > -- save and quit
> > 
> > postgres=# select :x;
> >  select1select1 
> > 
> >   1
> 
> Now "x" is "1select1select1;;", so you are running
> 
>  select 1select1select1;;;
> 
> Here "select1select1" is interpreted as alias, so you get that column
> heading.
> 
> You can use "\echo :x" to see the value of a variable.
> 

Thanks for the feedback Laurenz

I guess the main remark is: it is not so intuitive that \e behaves
differently than \i

From https://www.postgresql.org/docs/current/app-psql.html
"
\e..
Type semicolon or \g to send it, or \r to cancel it by clearing the
query buffer.

Treating the buffer as a single line primarily affects meta-commands:
whatever is in the buffer after a meta-command will be taken as
argument(s) to the meta-command, even if it spans multiple lines. (Thus
you cannot make meta-command-using scripts this way. Use \i for that.)
"

### case 1: (\e)
\set x 1
select :x ;

### case 2: (\e)
\set x 1
select :x \g

resulting in the same value for x (\echo :x), but different
intermediate output with case 1 and case 2,

"
\g..
If the current query buffer is empty, the most recently sent query is
re-executed instead. Except for that behavior, \g without any arguments
is essentially equivalent to a semicolon.
"

kind regards,
Wim







\set \e and newline

2022-02-24 Thread Wim Bertels
Hello,

a quick question about
https://www.postgresql.org/docs/current/app-psql.html
and the \set option

it seems that \set does not interpret an 'enter' interactively the same
as an 'enter' in a short script made with \e 

###
* case 1:
postgres=# \set x 1
postgres=# select :x;
 ?column? 
----------
        1
###


###
* case 2:
postgres=# \e

-- enter the following in the editor (the same code as above)

\set x 1
select :x;

-- save and quit

postgres=#

-- no output
-- curiously: again \e

postgres=#\e

-- shows select 1; in the editor in v14
-- shows nothing in the editor in v13 (or, recursively, the content
being cut off)
###


###
variation of case 2:
postgres=# \e

-- enter the following in the editor (the same code as above)

\set x 1
select :x;

-- save and quit

postgres=# select :x;
 select1select1 

  1

###


Doing the same thing with \i instead of \e does behave like i would
expect, ie the same as case 1.

These are referred to as meta-commands in the manual, which are said to
be affected when using \e, but \g instead of ; seems to work (while it
should be the same?)


###
variation of case 2 using \g instead of ; :
postgres=# \e

-- enter the following in the editor (the same code as above)

\set x 1
select :x \g

-- save and quit

 ?column? 
----------
        1
(1 row)

postgres=# 
###


-- 
kind regards,
Wim






Re: Pgcrypto extension - decrypt(encrypt(... not returning original data?

2021-12-01 Thread Wim Bertels
hex to ascii

SQL Padawan wrote on Wed 01-12-2021 at 18:37 [+]:
> \x6461

-- 
kind regards,
Wim Bertels
--
https://coronalert.be

Lector
UC Leuven-Limburg
--
Truth is the most valuable thing we have -- so let us economize it.
-- Mark Twain





Re: The tragedy of SQL

2021-09-14 Thread Wim Bertels
Is it possible that this is mainly an emotional discussion?

Raymond Brinzer wrote on Tue 14-09-2021 at 02:39 [-0400]:
> Many languages are awesome.  I'm always astonished at what great
> things people have come up with, over the years; it's been a
> wonderfully fertile field.  We would certainly not be better off if
> we'd just buckled down, and used COBOL and FORTRAN... or even
> relatively good languages like C, APL, and Lisp.
> 
> It is certainly possible to change too lightly, for small reasons.
> That doesn't mean that forever enduring the same problems is a good
> idea.
> 
> On Tue, Sep 14, 2021 at 2:18 AM Rob Sargent 
> wrote:
> > On 9/13/21 11:51 PM, Guyren Howe wrote:
> > 
> > They are making a decent decision. SQL is a *fucking terrible*
> > language, which I don’t blame them for not wanting to learn.
> > 
> > The whole industry, programming languages, infrastructure,
> > everything would have developed differently if relations were a
> > natural, pleasurable thing to use in any programming language. Like
> > an Array, or a Hash.
> > On Sep 13, 2021, 22:45 -0700, Hemil Ruparel <
> > hemilruparel2...@gmail.com>, wrote:
> > 
> > SQL is not the problem. Problem are the devs. I love SQL. I hate
> > orms. The problem with databases is people refuse to treat it as
> > the entity it is and want to use their beautiful OO system. Problem
> > is databases are not OO. We need to recognize that and treat
> > databases as databases.
> > 
> > All languages are fucking terrible.  There are thousands of the
> > them because some people bump into a feature they don't like and
> > run off an make another fucking terrible language.  For the love of
> > God, please don't be one of those people.  The rest of us find
> > languages we can abide and do productive things with using features
> > we like and avoiding those we don't.  I've always felt it was no
> > small miracle the vendors managed to agree to ODBC/JDBC driver
> > specs (even though the SQL language definition is "more like
> > guidelines").  Go scream at the DOM and JavaScript.
> 
> 






curious cost vs execution time (local test)

2021-03-09 Thread Wim Bertels
Hello,

Using the chinook database
https://github.com/lerocha/chinook-database/blob/master/ChinookDatabase/DataSources/Chinook_PostgreSql.sql
locally,
so there should be few side-effects (network, host, cache..)
for the following two equivalent queries,
the cost and execution times are curious.

(tip: if you have encoding problems loading the chinook database with
psql on linux:
#set client_encoding to 'windows-1251';
)

Can anyone confirm or correct this?

The equivalent queries are

SELECT r1."Title",hs."Name",hs."Milliseconds"
FROM "Album" r1 LEFT JOIN LATERAL
 (SELECT *
 FROM "Track" b2
natural inner join "Album"
 WHERE r1."AlbumId" = b2."AlbumId"
 ORDER BY "Milliseconds" desc
 FETCH FIRST 2 ROW ONLY) hs
ON true;

vs

SELECT "Title", "Name", "Milliseconds"
FROM "Album" A NATURAL INNER JOIN "Track" T
WHERE 1 >= (SELECT COUNT(*)
   FROM "Album" NATURAL INNER JOIN "Track"
   WHERE "Album"."Title" = A."Title"
   AND "Track"."Milliseconds" < T."Milliseconds");



Running explain (analyze) a few times gives the following curious
result:

With the Lateral join query:
   
                                  QUERY PLAN
--------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=16.37..5701.03 rows=694 width=43) (actual
time=0.043..7.347 rows=612 loops=1)
   ->  Seq Scan on "Album" r1  (cost=0.00..6.47 rows=347 width=27)
(actual time=0.006..0.022 rows=347 loops=1)
   ->  Limit  (cost=16.37..16.37 rows=2 width=856) (actual
time=0.021..0.021 rows=2 loops=347)
 ->  Sort  (cost=16.37..16.39 rows=10 width=856) (actual
time=0.021..0.021 rows=2 loops=347)
   Sort Key: b2."Milliseconds" DESC
   Sort Method: quicksort  Memory: 25kB
   ->  Nested Loop  (cost=0.28..16.27 rows=10 width=856)
(actual time=0.009..0.019 rows=10 loops=347)
 ->  Seq Scan on "Album"  (cost=0.00..7.34 rows=1
width=4) (actual time=0.008..0.016 rows=1 loops=347)
   Filter: ("AlbumId" = r1."AlbumId")
   Rows Removed by Filter: 346
 ->  Index Scan using "IFK_TrackAlbumId" on "Track"
b2  (cost=0.28..8.83 rows=10 width=24) (actual time=0.001..0.002
rows=10 loops=347)
   Index Cond: ("AlbumId" = r1."AlbumId")
 Planning Time: 0.094 ms
 Execution Time: 7.388 ms



With the correlated subquery (where):
                                  QUERY PLAN
--------------------------------------------------------------------------------
 Hash Join  (cost=10.81..148.79 rows=1168 width=43) (actual
time=0.109..80.502 rows=612 loops=1)
   Hash Cond: (t."AlbumId" = a."AlbumId")
   Join Filter: (1 >= (SubPlan 1))
   Rows Removed by Join Filter: 2891
   ->  Seq Scan on "Track" t  (cost=0.00..80.03 rows=3503 width=24)
(actual time=0.003..0.193 rows=3503 loops=1)
   ->  Hash  (cost=6.47..6.47 rows=347 width=27) (actual
time=0.054..0.055 rows=347 loops=1)
 Buckets: 1024  Batches: 1  Memory Usage: 29kB
 ->  Seq Scan on "Album" a  (cost=0.00..6.47 rows=347 width=27)
(actual time=0.003..0.025 rows=347 loops=1)
   SubPlan 1
 ->  Aggregate  (cost=16.23..16.24 rows=1 width=8) (actual
time=0.022..0.022 rows=1 loops=3503)
   ->  Nested Loop  (cost=0.28..16.22 rows=3 width=0) (actual
time=0.011..0.022 rows=7 loops=3503)
 ->  Seq Scan on "Album"  (cost=0.00..7.34 rows=1
width=4) (actual time=0.008..0.018 rows=1 loops=3503)
   Filter: (("Title")::text = (a."Title")::text)
   Rows Removed by Filter: 346
 ->  Index Scan using "IFK_TrackAlbumId" on
"Track"  (cost=0.28..8.85 rows=3 width=4) (actual time=0.001..0.003
rows=7 loops=3503)
   Index Cond: ("AlbumId" = "Album"."AlbumId")
   Filter: ("Milliseconds" < t."Milliseconds")
   Rows Removed by Filter: 8
 Planning Time: 0.206 ms
 Execution Time: 80.561 ms


Summarized:

Lateral:  (cost=16.37..5701.03 rows=694 width=43)  (actual time=0.043..7.347)
C Subq:   (cost=10.81..148.79 rows=1168 width=43)  (actual time=0.109..80.502)
with the actual number of rows for these queries being 612,
so the row count for the C Subq is not well estimated.

Nonetheless, in this case the most expensive query is the fastest one.

(PS: Analyze was run on the tables prior to the queries)


-- 
kind regards,
Wim Bertels
--
https://coronalert.be

--
"Not Hercules could have knock'd out his brains, for he had none."
-- Shakespeare



Re: Import data from one instance another

2021-03-09 Thread Wim Bertels
Hello Rama,

not quite sure what the question is,
you can choose the name of the foreign table
https://www.postgresql.org/docs/current/postgres-fdw.html
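To make the distinction concrete: postgres_fdw is the foreign data wrapper (the extension), while a foreign table is what you create through it. A sketch (the server, user, and option values are made up):

```sql
CREATE EXTENSION postgres_fdw;

-- Describe instance A as seen from instance B:
CREATE SERVER instance_a
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'host-a', dbname 'dbA');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER instance_a
    OPTIONS (user 'remote_user', password 'secret');

-- Pull in the definition of "sports"; you could also define the
-- foreign table by hand, under any local name you like:
IMPORT FOREIGN SCHEMA public LIMIT TO (sports)
    FROM SERVER instance_a INTO public;

-- Then copy the data locally on instance B:
-- CREATE TABLE sports_local AS SELECT * FROM sports;
```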

Rama Krishnan wrote on Tue 09-03-2021 at 14:47 [+0530]:
> Hi Team, 
> 
> I have two instances called A and B, in A instances I have one table
> name called sports I want to import data from A into B instances I
> found foreign data wrapper is the best solution when the table is
> huge here I have doubt is it foreign data wrapper and foreign tables
> are same?
> 
> 
> Thanks
> 
> Rama Krishnan
-- 
kind regards,
Wim Bertels
--
https://coronalert.be

Lector
UC Leuven-Limburg
--
"... an experienced, industrious, ambitious, and often quite often
picturesque liar."
-- Mark Twain





Re: psql > split > queries & output

2020-12-04 Thread Wim Bertels
Daniel Verite wrote on Thu 03-12-2020 at 16:32 [+0100]:
>   Wim Bertels wrote:
> 
> > but interactively this doesn't seem to work
> 
> Interactively something like this might do the job:
> 
> $  PSQL_HISTORY=/tmp/queries.txt psql  -L /tmp/logfile.txt
> 
> After running some queries, and a clean exit from psql:

yes indeed,
but as i mentioned in the beginning,
i can't consult the current history in a side pane,
only with the \s option
(while still running the psql session)

(side note:
$ echo $PSQL_HISTORY (is empty)

$ psql
psql (12.5 (Debian 12.5-1.pgdg100+1))
Type "help" for help.
# \set HISTFILE /tmp/test
# select 1;

has no effect, /tmp/test is not created,
everything is put into ~/.psql_history )

(debian specific?, intended?, or bug?)


> 
> $ cat /tmp/queries.txt
> select 1;
> select 2;
> 
> 
> $ cat /tmp/logfile.txt 
> * QUERY **
> select 1;
> **
> 
>  ?column? 
> --
>   1
> (1 row)
> 
> * QUERY **
> select 2;
> **
> 
>  ?column? 
> --
>   2
> (1 row)
> 
> 
> Best regards,
-- 
kind regards,
Wim 
--
https://coronalert.be

--
In India, "cold weather" is merely a conventional phrase and has come into
use through the necessity of having some way to distinguish between weather
which will melt a brass door-knob and weather which will only make it mushy.
-- Mark Twain





Re: psql > split > queries & output

2020-12-03 Thread Wim Bertels
Daniel Verite wrote on Thu 03-12-2020 at 15:18 [+0100]:
>   Wim Bertels wrote:
> 
> > * one pipe (or..) to a file which only contains the statements
> > (queries) history (and not the output)
> > * another pipe (or..) to a file which only contains the output
> > (with or
> > without the statements) history
> 
> Consider that script:
> 
> $ cat f.sql
> \set ECHO queries
> \o output.txt
> select 1;
> select 2;
> select 3;
> 
> If it's run with
> 
> $ psql -f f.sql >queries.txt
> 
> then it produces the queries in queries.txt and the output in
> output.txt


thank you,
i use the same with a input file,
but interactively this doesn't seem to work

ie 
psql > queries.txt (does not work (stdout))
(psql 2> queries.txt does work (stderr))
(psql &> queries.txt does not work (both))

--

psql -f - > out 

does work interactively,
but no fun experience :) (no completion, and the separation does not
seem to work)

> 
> 
> Best regards,
-- 
kind regards,
Wim 
--
https://coronalert.be

--
If you laid all of our laws end to end, there would be no end.
-- Mark Twain





psql > split > queries & output

2020-12-03 Thread Wim Bertels
Hello,

psql shows (the queries and) the output,
there are for example the -L option and echo_queries option

is there a way to run psql so that:
* inside psql there are no surprises :) (ignore this line:)
* one pipe (or..) to a file which only contains the statements
(queries) history (and not the output)
* another pipe (or..) to a file which only contains the output (with or
without the statements) history

why? for example, 
to have easy access in a psql demonstration to past commands
(history),
without the output cluttering or overwhelming everything, side by side
on the screen. (optionally with a third pane for the output)

there is also the \s option in psql,
but i guess the history is only written to file when psql is closed,
and is not accessible during the session (only in memory?),
otherwise this would also be an option
(not ctrl+r)

suggestions or alternatives while using the commandline?

-- 
kind regards,
Wim 
--
https://coronalert.be


--
Always do right.  This will gratify some people and astonish the rest.
-- Mark Twain





Re: SSL between Primary and Seconday PostgreSQL DBs

2020-09-03 Thread Wim Bertels
First thought: a general solution that you could use is wireshark.

For example: search for "wireshark ssl traffic"
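Within postgres itself, the pg_stat_ssl view (available since 9.5) shows per connection whether SSL is in use; a sketch, to be run on each instance:

```sql
-- One row per backend; ssl = true means that connection uses SSL.
-- Joining pg_stat_activity shows which user/application a connection
-- belongs to, e.g. the walsender serving the standby (on the primary):
SELECT a.pid, a.usename, a.application_name,
       s.ssl, s.version, s.cipher
FROM pg_stat_ssl s
JOIN pg_stat_activity a USING (pid);
```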


Susan Joseph wrote on September 3, 2020 10:54:36 AM UTC:
>OK, I understand I was just hoping someone could confirm that my
>settings are correct.  
>I didn't come across an error so everything seems to be working I just
>can't verify that SSL is working.  
>Are there any commands you can run to verify that SSL is up and
>operational? 
>Testing from a client to the database doesn't prove that database to
>database is working.   
> 
>Susan Joseph
>sandajos...@verizon.net
> 
> 
>-Original Message-
>From: Peter Eisentraut 
>To: Susan Joseph ;
>pgsql-gene...@postgresql.org 
>Sent: Thu, Sep 3, 2020 1:01 am
>Subject: Re: SSL between Primary and Seconday PostgreSQL DBs
>
>On 2020-08-27 12:57, Susan Joseph wrote:
>> So has no one done this before?
>
>I'm sure people have done this.  But I suggest that if you ask a 
>question on this mailing list, you ask something more concrete, like, I
>
>tried to do this, and got stuck here, and tried this and got this
>error. 
>  People can help with that sort of thing.  What we have here is a 
>complex security setup and you are asking people to do an open-ended 
>review.  No one wants to do that.
>
>> -Original Message-
>> From: Susan Joseph 
>> To: pgsql-gene...@postgresql.org 
>> Sent: Mon, Aug 24, 2020 10:10 am
>> Subject: SSL between Primary and Seconday PostgreSQL DBs
>> 
>> I have setup a Primary and Secondary PostgreSQL DBs.  They were setup
>up 
>> with basic replication then I went back and modified them to use
>SSL.  I 
>> am just not sure if I did it correctly.  Everything is working but I 
>> want to make sure I have the settings correctly.  I am using
>PostgreSQL 
>> 11.2.
>> 
>>  * I have a PKI that I stood up so I issued 2 server certificates one
>>    for each database from my CA.
>>  * Primary server certificate - Primary Database
>>      o The FQDN and IP address are set in the SAN field.
>>      o FQDN is also the CN in the DN
>>      o Key Usage is set to Digital Signature and Key encipherment
>>      o EKU is set to Server Authentication and Client Authentication
>>  * Rep_user certificate - Secondary Database
>>      o CN is set to the rep_user account name
>>      o Key Usage is set to digital signature and key encipherment
>>      o EKU is set to client authentication
>>  * Each certificate file contains the certificate and the subCA
>>    certificate who issued the certificate and put in a file called
>>    server.crt for the Primary and client.crt for the secondary.
>>  * The key for each certificate is stored in a separate file
>>    unencrypted (I have questions about this later on) in a file
>called
>>    server.key and client.key
>>  * The server.crt, server.key, and root.crt are put onto the primary
>>    database server in the /data/pgsql/data location, the owner and
>>    group of these files is set to postgres
>>  * The client.crt, client.key, and root.crt are put onto the primary
>>    database server in the /data/pgsql/data location, the owner and
>>    group of these files is set to postgres
>>  * On the Primary in postgresql.conf I set:
>>      o ssl=on
>>      o ssl_ca_file='root.crt'
>>      o ssl_cert_file='server.crt'
>>      o ssl_key_file='server.key'
>>      o ssl_ciphers='HIGH:MEDIUM:+3DES:!aNULL'
>>  * On the Primary in pg_hba.conf I add a replication line:
>>      o hostssl    replication        
>>        rep_user  cert
>>  * On the Secondary I set the following information in the
>>    postgresql.conf to:  (DO I NEED TO DO THIS??)
>>      o ssl=on
>>      o ssl_ca_file='root.crt'
>>      o ssl_cert_file='client.crt'
>>      o ssl_cert_fkey='client.key'
>>      o ssl_ciphers='HIGH:MEDIUM:+3DES:!aNULL'
>>  * On the Secondary I edit the recovery.conf file to the following:
>>      o primary_conninfo = 'user=rep_user passfile=''/data/.pgpass''
>>        host= port=5432 sslmode=verify-ca
>>        sslcert=client.crt sslkey=client.key sslcompression=0
>>        target_session_attrs=any'
>>  * On the Secondary I edit the pg_hba.conf file and change the
>rep_user
>>    line to:
>>      o hostssl  replication rep_user  
>>        IP>/32  cert clientcert=1
>>  * On the Secondary I move the root.crt to
>/data/pgsql/data/.postgresql
>>  * Then I restart the databases
>> 
>> 
>> My questions are:
>> 
>>  * Do I need to set the information in the Secondary postgresql.conf?
>
>>    Originally I did not set this and everything worked but I saw
>errors
>>    in my log files that said to do SSL these needed to be set so I
>went
>>    back and set them.  Are there pgsql commands I can run to test
>that
>>    my SSL is working in both directions?
>>  * Are my pg_hba.conf files set correctly?  Is that how you get SSL
>>    "turned on" for communications between the primary and the
>rep_user
>>    account?
>>  * If I leave my key file encrypted then eve

Re: Capturing just slow queries

2020-07-17 Thread Wim Bertels
You might have a look at:
https://www.postgresql.org/docs/current/auto-explain.html

Also there are several monitoring solutions:
https://wiki.postgresql.org/wiki/Monitoring
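A minimal auto_explain setup, for reference (a sketch; the threshold value is arbitrary, and note that, like log_min_duration_statement, it does not filter by statement type):

```
# postgresql.conf (sketch)
shared_preload_libraries = 'auto_explain'   # requires a server restart
auto_explain.log_min_duration = '500ms'     # log plans of slower statements
auto_explain.log_analyze = off              # 'on' adds timings, with overhead
```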


Tiffany Thang wrote on Thu 16-07-2020 at 13:41 [-0400]:
> Hi,
> log_min_duration_statement captures all statements including DMLs
> that have exceeded the threshold. Is there a way in PG 12 to capture
> just select statements excluding all DMLs and DDLs? In my
> environment, it's acceptable for DMLs and DDLs to cross the threshold
> and we are more interested in capturing poor performing select
> statements. 
> 
> Thanks.
> 
> Tiff 
-- 
kind regards,
Wim Bertels
--
Lector
UC Leuven-Limburg
--
There is no hunting like the hunting of man, and those who have hunted
armed men long enough and liked it, never care for anything else thereafter.
-- Ernest Hemingway





Re: Is PostgreSQL SQL Database Command Syntax Similar to MySQL/MariaDB?

2020-03-30 Thread Wim Bertels
Turritopsis Dohrnii Teo En Ming wrote on Mon 30-03-2020 at 20:49 [+0800]:
> Good evening from Singapore,
> 
> Is PostgreSQL SQL database command syntax similar to MySQL/MariaDB?

Unfortunately, mysql/mariadb syntax in general differs more from the
iso/ansi sql standard than other dbms do.

> 
> Because I have never used PostgreSQL before.

So you might have to adapt a bit when leaving mysql/mariadb for another
dbms. To start development you should be ok;
if you want to migrate data, you might want to use a migration tool (or
fdw) instead of dumping mysql syntax into postgresql.
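A small example of the kind of difference to expect (a sketch; the table is made up):

```sql
-- MySQL/MariaDB (default sql_mode) quotes identifiers with backticks:
--   SELECT `name` FROM `users` LIMIT 5;
-- Standard SQL, and therefore PostgreSQL, uses double quotes:
SELECT "name" FROM "users" LIMIT 5;
```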

> 
> I am looking forward to hearing from you soon.
> 
> Thank you.
> 
> 
> 
> 
> 
> 
> 
> 
> 
-- 
kind regards,
Wim Bertels
--
Lector
UC Leuven-Limburg
--
For years a secret shame destroyed my peace--
I'd not read Eliot, Auden or MacNiece.
But now I think a thought that brings me hope:
Neither had Chaucer, Shakespeare, Milton, Pope.
-- Justin Richardson.





Re: Conditional return of aggregated data

2019-12-02 Thread Wim Bertels
Hello Laura,

I don't know if I understand your question fully,
but this might be helpful: FILTER

SELECT
count(*) AS unfiltered,
count(*) FILTER (WHERE i < 5) AS filtered
FROM generate_series(1,10) AS s(i);
 unfiltered | filtered
------------+----------
         10 |        4
(1 row)

https://www.postgresql.org/docs/current/sql-expressions.html
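Applied to your original question, one single-query sketch (using the table and column names from your mail, with 100 standing in for some_number) could be:

```sql
-- Compute the aggregate once in a CTE, then gate every row on it;
-- stats_residence, statcount and short_name_en are taken from the thread,
-- and the threshold 100 is an arbitrary stand-in for "some_number".
WITH agg AS (
    SELECT sum(statcount) AS total
    FROM stats_residence
)
SELECT s.statcount, s.short_name_en
FROM stats_residence s
CROSS JOIN agg
WHERE agg.total > 100;
```

The reason your attempt was not valid syntax: column aliases from the SELECT list (like aggstat) are not visible in WHERE, so the aggregate has to live in a subquery or CTE.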

hth,
Wim

Laura Smith schreef op ma 02-12-2019 om 11:10 [+]:
> Hi,
> 
> I have some aggregated statistics which are currently being queried
> as follows:
> 
> create view stats_age as
> SELECT a.category,
>a.statcount,
>b.a_desc
> FROM reg_aggregated_stats a,regs_select_age b where a.category='age'
> and a.datapoint::smallint=b.a_val order by a.datapoint asc;
> 
> However, as these relate to event registrations, a suggestion has
> been made that the database really should be returning nothing until
> a certain number of registrations has been met (in order to avoid
> privacy infringing inferrence from what should be an otherwise fully
> irreversibly aggregated dataset).
> 
> Now, the queries are wrapped in PL/PGSQL functions anyway, so I could
> make a second call to Postgres to find out sum(statcount) and then
> conditionally return based on that.
> 
> But is there a smarter way to do this out of a single SQL query ?
> 
> My initial idea was something along the lines of :
>  select (select sum(statcount) from stats_residence) as
> aggstat,statcount,short_name_en from stats_residence where
> aggstat>some_number;
> 
> But as I soon discovered that's not valid syntax! Hence ideas welcome
> from those smarter than me.
> 
> Thanks !
> 
> Laura
> 
> 
-- 
mvg,
Wim Bertels
--
Lector
UC Leuven-Limburg
--
My only love sprung from my only hate!
Too early seen unknown, and known too late!
-- William Shakespeare, "Romeo and Juliet"



Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Wim Bertels


> 
> Now, I have four questions:
> 
> 1) Is my lecturer full of it or does he really have a point?

Hello Pól,

I don't know; I also teach databases, and personally I have never
experienced this.

Sometimes people are really a fan of a certain product, sometimes
combined with the thought that all other products are bad; I don't know
if this is the case here. You could compare it with soccer: a Barcelona
fan will never become a Real Madrid fan and vice versa. So "rational"
decisions (at first based on some reasoning) tend to get loaded with
emotional feelings.

In these kinds of discussions both parties should have their say,
not just one (in this case we have only heard you, not the teacher).

There is no such thing as a perfect DBMS;
neither Postgres, nor Oracle, nor ..

> 
> 2) The actual concrete acknowledged problem with fsync that affected
> PostgreSQL - why didn't it affect Oracle? Or MySQL? Or did it but it
> was so rare that it never became apparent - it wasn't that obvious
> with PostgreSQL either - one of those rare and intermittent problems?

You can set fsync to off (it is on by default) for more performance,
but that comes at the cost of the D (durability) in ACID; you no longer
have it.
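For illustration, fsync is an ordinary server setting; the snippet below only inspects it, since actually turning it off risks unrecoverable corruption after a crash:

```sql
-- Check the current setting; the default is on.
SHOW fsync;

-- To disable it (NOT recommended on data you care about), either set
-- in postgresql.conf:
--     fsync = off
-- or change it at runtime and reload the configuration:
-- ALTER SYSTEM SET fsync = off;
-- SELECT pg_reload_conf();
```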

> 
> 3) Were there ever any problems with BSD?

As far as I understand, BSD and its variants are very solid,
so they are good for server use (less so for the desktop).

> 
> 4) What is the OS of choice for *_serious_* PostgreSQL installations?

It depends.

If transparency is important to you, choose an open source OS.
If time is important to you, choose what you already know.
If you are willing to spend time on it, I would personally choose a
Linux, BSD or Solaris based OS.

If it helps: in my case I also had to decide on a DBMS for my classes.
The reasons I chose Postgres are, in a nutshell:
* free
* open
* runs well on servers comparable to an average desktop PC or better
* close to the ISO SQL standard (the reason I didn't choose MySQL/now
MariaDB)
* seems to have a future
* within all these, Postgres seems to have implemented the most features
* after using it for a while (18 years now), I should now add: a great
community

some links:
https://www.top500.org/
https://en.wikipedia.org/wiki/DB-Engines_ranking
https://db-engines.com/en/ranking

> 
> I hope that I have been clear, but should anyone require any
> clarification, please don't hesitate to ask me.
> 
> Tia and rgs,
> 
> Pól...
> 
> 
-- 
mvg,
Wim Bertels
--
Lector
UC Leuven-Limburg
--
The human race has one really effective weapon, and that is laughter.
-- Mark Twain




Re: Case Insensitive Comparison with Postgres 12

2019-10-09 Thread Wim Bertels
Using the data type citext might be an alternative solution.
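A minimal sketch, assuming the citext contrib module is available and you may create extensions in the database:

```sql
-- citext compares case-insensitively by default:
CREATE EXTENSION IF NOT EXISTS citext;

SELECT 'Abc'::citext = 'abc'::citext;  -- returns true
```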

Igal Sapir  schreef op October 8, 2019 10:51:52 PM UTC:
>I am trying to test a simple case insensitive comparison.  Most likely
>the
>collation that I chose is wrong, but I'm not sure how to choose the
>correct
>one (for English/US?).  Here is my snippet:
>
>create collation case_insensitive(
>provider=icu, locale='en-US-x-icu', deterministic=false
>);
>select 'Abc' = 'abc' collate case_insensitive;
>
>I expected true but am getting false.
>
>Any thoughts?
>
>Thanks,
>
>Igal

-- 
Sent from a Fairphone with K-9 Mail. Please excuse my brevity.

Re: RowDescription message

2019-10-07 Thread Wim Bertels
Or, as another example that yields a zero-field RowDescription:
CREATE TABLE t();

SELECT *
FROM t;

From: Tatsuo Ishii 
Sent: Monday, 7 October 2019 10:00
To: pgsql-gene...@postgresql.org 
Subject: RowDescription message

According to the manual of the RowDescription message
https://www.postgresql.org/docs/12/protocol-message-formats.html

  Specifies the number of fields in a row (can be zero).

Could 0 fields actually happen?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: Camel case identifiers and folding

2019-03-15 Thread Wim Bertels
Steve Haresnape schreef op vr 15-03-2019 om 12:06 [+1300]:
> I'm porting a sql server database to postgresql 9.6. My camelCase
> identifiers are having their humps removed. This is disconcerting and
> sad.

The ISO SQL standard defines exactly this behaviour (as you note), so
this is to be expected. (Strictly, the standard folds unquoted
identifiers to upper case; PostgreSQL folds them to lower case.)

Suggestion: instead of CamelCase, use separate_lower_case_by_underscore.

If you don't like that notation, maybe this helps: it is suggested that
the underscore style is more relaxing for the brain, since our eyes tend
to jump to capitals first.
Reasoning: scanning an identifier like dogFeedFood, the eye goes first
to the F of Feed and Food, and only then back to dog.

Changing habits can be hard, unfortunately.

"" > literal
'' > string
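A short sketch of the folding and quoting behaviour:

```sql
-- Unquoted identifiers are folded to lower case in PostgreSQL:
CREATE TABLE camelDemo (myColumn int);
SELECT mycolumn FROM cameldemo;           -- works: both names were folded

-- Double quotes preserve case, but then you must quote everywhere:
CREATE TABLE "CamelDemo" ("myColumn" int);
SELECT "myColumn" FROM "CamelDemo";
```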

-- 
mvg,
Wim
--
Lector
UC Leuven-Limburg
--
O, it is excellent
To have a giant's strength; but it is tyrannous
To use it like a giant.
-- Shakespeare, "Measure for Measure", II, 2