Re: Multitenant architecture

2020-06-05 Thread Rob Sargent



> On Jun 5, 2020, at 9:57 PM, Vasu Madhineni  wrote:
> 
> 
> Hi Rob,
> 
> Our environment is medical clinical data, so each clinic is a tenant. 
> Approximately 500+ tenants with 6TB of data.

How quickly are both of those numbers growing? What are your cross-clinic query 
requirements?
Is any single table dominating storage size?



Logical replication - ERROR: could not send data to WAL stream: cannot allocate memory for input buffer

2020-06-05 Thread Aleš Zelený
Hello,

we have been using logical replication for more than 2 years, and today I found
a new, not-yet-seen error message from the WAL receiver. The replication was in
catchup mode (on the publisher side some new tables had been created and added to
the publication; on the subscriber side they were still missing).

RDBMS version: PostgreSQL 11.4 (Ubuntu 11.4-1.pgdg18.04+1) on
x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0,
64-bit
OS: Ubuntu 18.04.2 LTS

RDBMS was installed from pgdg packages.

The error message:
2020-06-05 20:00:08 UTC 19753 5edaa378.4d29 20 540/1132087
[XX000]:ERROR:  could not send data to WAL stream: cannot allocate memory
for input buffer
2020-06-05 20:00:08 UTC 867 5df8a0b4.363 286130[0]:LOG:
 background worker "logical replication worker" (PID 19753) exited with
exit code 1

This repeated several times; finally it proceeded and switched into streaming
state. The OS has 64 GB RAM; the OS plus the database instance usually use about
20 GB, and the rest is used as OS buffers. I've checked monitoring (sampled every
10 seconds) and no memory usage peak was visible, so unless it was a very
short spike, I would not expect the system to run out of memory.

Is there something I can do to diagnose and avoid this issue?
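
For context, the catchup/streaming state mentioned above is what the publisher
reports in pg_stat_replication, e.g. (a sketch, column list trimmed):

    SELECT application_name, state, sent_lsn, replay_lsn,
           pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;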

Thanks Ales


Re: Logical Replication and table bloat

2020-06-05 Thread Peter Eisentraut

On 2020-06-05 21:53, Martín Fernández wrote:

Yesterday we stumbled upon a performance issue that we were not expecting. We 
are replicating our database using AWS DMS, which uses logical replication to 
capture changes. We have some hot tables that get updated very regularly, and 
with DMS turned on we started noticing that in those tables, table bloat 
increased considerably: ~15 times more free_tuples than the average.

When doing logical replication, will the subscriber hold the tuples that could 
be flagged for reuse until they are sent? Just trying to understand a little 
bit better how logical replication is affecting the vacuuming.


As far as vacuum is concerned, it is very similar to a normal client 
session: It may insert tuples, update tuples, delete tuples; update and 
delete create bloat, autovacuum should come along to clean up.  There 
isn't normally any separate vacuum tuning necessary for this, but if you 
are experiencing issues, first treat it like a normal vacuum 
configuration problem.
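
If needed, autovacuum can also be made more aggressive for the affected tables, 
for example (the relation name and the numbers are only illustrative):

    ALTER TABLE hot_table SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_cost_delay = 2
    );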


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Possible improvement

2020-06-05 Thread Paul van der Linden
Ok, as always there's a lot more to take into account than when just
superficially looking at it.
And indeed your counterexample shows that you'd have to include all the
previous WHEN conditions too as false:
WHERE x=0 IS DISTINCT FROM true AND 1/x > 100, which could become quite
messy (especially with nested CASEs).


On Fri, Jun 5, 2020 at 9:02 PM Tom Lane  wrote:

> Paul van der Linden  writes:
> > For the case where it isn't known if the case expression itself is
> indexed,
> > technically that should be added as a decision-node in the query planner.
>
> That'd be fairly hard to do, if we're regarding this as an expression
> simplification step, since expression simplification is run long before
> any consideration is given to indexes.  (Even if we were willing to
> contemplate reversing that ordering, it'd be hard to do, because we
> need the simplified expressions to compare to index expressions ---
> else we'd get fooled by irrelevant discrepancies that simplification
> is supposed to remove.)
>
> The alternative is to try to wire this into index path generation instead
> of treating it as a general-purpose expression simplification ... but that
> likewise seems pretty undesirable.  If you've got a case like this, you'd
> like it to be simplified whether it ends up as an indexqual or not.
>
> So, as I said, I'm inclined to dismiss David's complaint as an
> impracticable requirement.  The other issues I raised are far more
> significant.
>
> BTW, speaking of correctness, this seems like a pretty dire
> counterexample:
>
> SELECT ... FROM
>(SELECT CASE WHEN x = 0 THEN 'zero'
> WHEN 1/x > 100 THEN 'tiny'
> ELSE 'whatever' END AS class,
>...
>) ss
> WHERE ss.class = 'tiny';
>
> Naive application of this transformation would convert the WHERE to
>
> WHERE 1/x > 100
>
> creating divide-by-zero failures where there should be none.
> I'm not sure how we get around that; in general the planner
> has little clue which operations can throw what errors.
>
> regards, tom lane
>


Re: Possible improvement

2020-06-05 Thread Paul van der Linden
Thanks for your thoughts.

For the case where it isn't known if the case expression itself is indexed,
technically that should be added as a decision-node in the query planner.
After all there are 2 possibilities to handle that so it should be up to
the planner to choose the cheapest.

Having said that, if the time spent planning the query is *that* critical I
agree that it probably isn't worth it.
Just that in my line of work the execution time of a query is many
orders of magnitude larger than the planning time (my record holder is a
query that runs for just over 3 days...).

On Fri, Jun 5, 2020 at 4:31 PM Tom Lane  wrote:

> David Rowley  writes:
> > On Fri, 5 Jun 2020 at 14:41, Paul van der Linden
> >  wrote:
> >> If I have a query like:
> >>
> >> SELECT * FROM (
> >> SELECT
> >> CASE
> >> WHEN field='value1' THEN 1
> >> WHEN field='value2' THEN 2
> >> END AS category
> >> FROM table1
> >> ) AS foo
> >> WHERE category=1
> >>
> >> doesn't use the index on field, while technically it could do that.
> >> Is it hard to implement drilling down the constant in the WHERE to
> within the CASE?
>
> > It doesn't look impossible to improve that particular case.  See
> > eval_const_expressions_mutator() in clauses.c at T_CaseExpr. However,
> > this would need to take constant folding further than we take it
> > today. Today we just have the ability to simplify expressions which
> > are, by themselves, an expression which will always evaluate to a
> > constant value. This case is more complex as it requires something
> > outside the CASE expr to allow the simplification to take place. In
> > this case, we'd need to look at the other side of the OpExpr to see
> > the const there before any transformation could simplify it.
>
> I'd tend to see this as a transformation rule that acts on equality-
> with-a-CASE-input, thereby avoiding the "action at a distance" problem.
>
> > It's
> > also not entirely clear that the simplification would always be a good
> > idea.  What, for example if there was an index on the case statement
> > but none on "field". The query may perform worse!
>
> FWIW, I'm not too fussed about that objection.  If we rejected new
> optimizations on the basis that somebody's optimized-for-the-old-way
> query might perform worse, almost no planner changes would ever get in.
> I think most people would feel that an optimization like this is an
> improvement.  (I recall coming across a similar case in an
> information_schema query just a few days ago.)  The hard questions
> I would ask are
> 1. Is the transformation actually correct?
> 2. Does it improve queries often enough to be worth the planning cycles
> expended to look for the optimization?
>
> As far as #1 goes, note that this CASE produces NULL if "field" is
> neither 'value1' nor 'value2', whereupon the equality operator would
> also produce NULL, so that simplifying to "field='value1'" is not
> formally correct: that would produce FALSE not NULL for other values
> of "field".  We can get away with the replacement anyway at the top
> level of WHERE, but not in other contexts.  Hence, it'd be wrong to
> try to make this transformation in eval_const_expressions(), which is
> applied to all expressions.  Possibly prepqual.c's canonicalize_qual()
> would be a better place.
>
> The real problem here is going to be objection #2.  The rules under
> which any optimization could be applied are nontrivial, so that we'd
> spend quite a bit of time trying to figure out whether the optimization
> applies ... and I'm afraid that most of the time it would not.
>
> regards, tom lane
>


Re: Logical Replication and table bloat

2020-06-05 Thread Adrian Klaver

On 6/5/20 12:53 PM, Martín Fernández wrote:

Hello,

Yesterday we stumbled upon a performance issue that we were not expecting. We 
are replicating our database using AWS DMS, which uses logical replication to 
capture changes. We have some hot tables that get updated very regularly, and 
with DMS turned on we started noticing that in those tables, table bloat 
increased considerably: ~15 times more free_tuples than the average.

When doing logical replication, will the subscriber hold the tuples that could 
be flagged for reuse until they are sent? Just trying to understand a little 
bit better how logical replication is affecting the vacuuming.


A question for the AWS folks:

https://forums.aws.amazon.com/forum.jspa?forumID=60



Thanks before hand!

Best,
Martín




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Ron

On 6/5/20 10:02 AM, Achilleas Mantzios wrote:

On 5/6/20 5:19 μ.μ., Thomas Kellerer wrote:

Achilleas Mantzios schrieb am 05.06.2020 um 14:05:
Plus PG does not directly support cross database queries using 3 part 
name, something

sqlserver excels at.

Maybe because SQL server does not have real databases but schemas instead ?
This sucks security wise.

That is wrong.

SQL Server has both: databases and schemas and it allows for standard 
compliant catalog.schema.table references.


I think you are confusing that with MySQL where a schema and a database 
are the same thing
I think that you are confusing what you think I am confusing. I am talking 
about our MS SQL installation here at work. It was not done by me anyway; I am 
the pgsql guy, but I still do most of the query stuff in MS SQL as well.

I already wrote in a subsequent email that my comment was far-fetched.


I manage SQL Server instances with multiple databases, each with multiple 
schemas.


--
Angular momentum makes the world go 'round.




Logical Replication and table bloat

2020-06-05 Thread Martín Fernández
Hello,

Yesterday we stumbled upon a performance issue that we were not expecting. We 
are replicating our database using AWS DMS, which uses logical replication to 
capture changes. We have some hot tables that get updated very regularly, and 
with DMS turned on we started noticing that in those tables, table bloat 
increased considerably: ~15 times more free_tuples than the average. 

When doing logical replication, will the subscriber hold the tuples that could 
be flagged for reuse until they are sent? Just trying to understand a little 
bit better how logical replication is affecting the vacuuming.

Thanks before hand!

Best,
Martín



Re: Index no longer being used, destroying and recreating it restores use.

2020-06-05 Thread Michael Lewis
Those row estimates are pretty far off.

Standard indexes and partial indexes don't get custom statistics created on
them, but functional indexes do. I wonder if a small function
needs_backup(shouldbebackedup, backupperformed) and an index created on that
function would nicely alleviate the pain. I would expect PG12 and extended
statistics of type MCV would bypass the need for that workaround, though.
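
Something along these lines (an untested sketch; the function and index names
are invented, the table and columns are taken from upthread):

    CREATE FUNCTION needs_backup(shouldbebackedup boolean, backupperformed boolean)
    RETURNS boolean
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT $1 AND NOT $2 $$;

    CREATE INDEX item_needs_backup_idx
        ON item (needs_backup(shouldbebackedup, backupperformed), itemCreated, filepath);

    ANALYZE item;  -- statistics get gathered for the indexed expression

    -- or, on PG12, without the helper function:
    CREATE STATISTICS item_backup_mcv (mcv) ON shouldbebackedup, backupperformed FROM item;
    ANALYZE item;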


Re: Possible improvement

2020-06-05 Thread Tom Lane
Paul van der Linden  writes:
> For the case where it isn't known if the case expression itself is indexed,
> technically that should be added as a decision-node in the query planner.

That'd be fairly hard to do, if we're regarding this as an expression
simplification step, since expression simplification is run long before
any consideration is given to indexes.  (Even if we were willing to
contemplate reversing that ordering, it'd be hard to do, because we
need the simplified expressions to compare to index expressions ---
else we'd get fooled by irrelevant discrepancies that simplification
is supposed to remove.)

The alternative is to try to wire this into index path generation instead
of treating it as a general-purpose expression simplification ... but that
likewise seems pretty undesirable.  If you've got a case like this, you'd
like it to be simplified whether it ends up as an indexqual or not.

So, as I said, I'm inclined to dismiss David's complaint as an
impracticable requirement.  The other issues I raised are far more
significant.

BTW, speaking of correctness, this seems like a pretty dire
counterexample:

SELECT ... FROM
   (SELECT CASE WHEN x = 0 THEN 'zero'
WHEN 1/x > 100 THEN 'tiny'
ELSE 'whatever' END AS class,
   ...
   ) ss
WHERE ss.class = 'tiny';

Naive application of this transformation would convert the WHERE to

WHERE 1/x > 100

creating divide-by-zero failures where there should be none.
I'm not sure how we get around that; in general the planner
has little clue which operations can throw what errors.

regards, tom lane




Re: Postgres12 - Confusion with pg_restore

2020-06-05 Thread Tom Lane
Laura Smith  writes:
> But doesn't the second half of my original post demonstrate that I tried that 
> very thing  ?  I did try creating the database first, but pg_restore just 
> complained even more ?

There are two ways you can do this:

1. Create the new database by hand (with CREATE DATABASE) and tell
pg_restore to restore into it.  In this case you *don't* say -C
to pg_restore, and your -d switch points at the DB to restore into.

2. Have pg_restore issue CREATE DATABASE.  In this case you *do* use
the -C switch, and your -d switch has to point at some pre-existing
database that pg_restore can connect to for long enough to issue the
CREATE DATABASE.

You can't mix-and-match these approaches.
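
Concretely, with the file and database names from upthread, the two forms
would look something like:

    # 1: create the database yourself, then restore into it (no -C)
    createdb foobar
    pg_restore -d foobar 4_foobar_pgdump_Fc

    # 2: let pg_restore issue CREATE DATABASE; -d points at an existing DB
    pg_restore -C -d postgres 4_foobar_pgdump_Fc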

regards, tom lane




Re: Postgres12 - Confusion with pg_restore

2020-06-05 Thread Laura Smith




Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Friday, 5 June 2020 19:23, Christophe Pettus  wrote:

> > On Jun 5, 2020, at 11:20, Laura Smith n5d9xq3ti233xiyif...@protonmail.ch 
> > wrote:
> > sudo -u postgres pg_restore -v -C -d foobar 4_foobar_pgdump_Fc
>
> You need to connect to a database that already exists (such as "postgres"); 
> it then creates the database you are restoring and switches to it. The 
> relevant manual line is:
>
> "When (-C / --create) is used, the database named with -d is used only to 
> issue the initial DROP DATABASE and CREATE DATABASE commands. All data is 
> restored into the database name that appears in the archive."
>
>

But doesn't the second half of my original post demonstrate that I tried that 
very thing  ?  I did try creating the database first, but pg_restore just 
complained even more ?




Re: Postgres12 - Confusion with pg_restore

2020-06-05 Thread Christophe Pettus



> On Jun 5, 2020, at 11:20, Laura Smith  
> wrote:
> sudo -u postgres pg_restore -v -C -d foobar 4_foobar_pgdump_Fc

You need to connect to a database that already exists (such as "postgres"); it 
then creates the database you are restoring and switches to it.  The relevant 
manual line is:

"When (-C / --create) is used, the database named with -d is used only 
to issue the initial DROP DATABASE and CREATE DATABASE commands. All data is 
restored into the database name that appears in the archive."

--
-- Christophe Pettus
   x...@thebuild.com





Postgres12 - Confusion with pg_restore

2020-06-05 Thread Laura Smith
According to the all-mighty manual 
(https://www.postgresql.org/docs/current/app-pgrestore.html), life is supposed 
to be as simple as:

"To drop the database and recreate it from the dump:
$ dropdb mydb
$ pg_restore -C -d postgres db.dump"

The reality seems to be somewhat different ?

sudo -u postgres pg_restore -v -C -d foobar 4_foobar_pgdump_Fc
pg_restore: connecting to database for restore
pg_restore: error: connection to database "foobar" failed: FATAL:  database 
"foobar" does not exist

So I thought I would try to create the database manually first (CREATE DATABASE 
).  That made pg_restore even more angry:

sudo -u postgres pg_restore -C -d foobar 4_foobar_pgdump_Fc
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 3088; 1262 43395 DATABASE foobar postgres
pg_restore: error: could not execute query: ERROR:  database "foobar" already 
exists
Command was: CREATE DATABASE foobar WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LC_COLLATE = 'en_GB.UTF-8' LC_CTYPE = 'en_GB.UTF-8';


pg_restore: warning: errors ignored on restore: 1





Re: Index no longer being used, destroying and recreating it restores use.

2020-06-05 Thread Koen De Groote
The requested result: https://explain.depesz.com/s/G7mU

Also, the data from the statistic itself:

=> SELECT stxname, stxkeys, stxdependencies

->   FROM pg_statistic_ext

->   WHERE stxname = 's1';
 stxname | stxkeys | stxdependencies
-+-+-
 s1  | 29 35   | 


On Fri, Jun 5, 2020 at 4:15 PM Adrian Klaver 
wrote:

> On 6/5/20 7:05 AM, Koen De Groote wrote:
> > I've collected all relevant info(I think so at least) and put it here:
> >
> > The table in question is used to keep filepath data, of files on a
> > harddrive.
> > The query in question is used to retrieve items which should be backed
> > up, but have not yet been.
> >
> > The relevant columns of the table:
> >
> >  Table "public.item"
> > Column   |Type |
> > Collation | Nullable |   Default
> >
> +-+---+--+--
> >   id | bigint  |
> >| not null | nextval('item_id_seq'::regclass)
> >   shouldbebackedup   | boolean |
> >| not null | true
> >   backupperformed| boolean |
> >| not null | false
> >   itemCreated| timestamp without time zone |
> >|  | now()
> >   filepath   | text|
> >|  |
> >
> >
> > The existing index, which no longer gets used:
> > "index_in_question" btree (shouldbebackedup, backupperformed,
> > itemCreated, filepath) WHERE shouldbebackedup = true AND backupperformed
> > = false
> >
> > The new index, made out of the exact same columns and conditions, get
> > used immediately after creation:
> > CREATE INDEX CONCURRENTLY index_test ON item USING btree
> > (shouldbebackedup, backupperformed, itemCreated, filepath) WHERE
> > shouldbebackedup = true AND backupperformed = false;
> >
> >
> > The query in question will look something like this:
> > select * from item where shouldbebackedup=true and
> > itemCreated<='2020-06-05 00:00:00.000' and backupperformed=false order
> > by filepath asc, id asc limit 100 offset 10400;
>
> The result of EXPLAIN ANALYZE for above.
>
> >
> > Having done a count, there are around 13000 items here, without the
> > offset and limit.
> > That being said, the amount is entirely dependant on what was added on a
> > previous day.
> >
> >
> > I tried creating an extended statistic, like this, but it had no effect:
> > CREATE STATISTICS s1 (dependencies) ON shouldbebackedup, backupperformed
> > FROM item;
> >
> > Settings from the conf file I think are related:
> >
> > shared_buffers = 1024MB
> > effective_cache_size = 2048MB
> > random_page_cost = 1.1
> > effective_io_concurrency = 200
> > work_mem = 32MB
> >
> > Finally, I state again that this database gets a nightly "vacuum
> analyze".
> >
> > My thanks for looking at this and any suggestions one might have.
> >
> > Regards,
> > Koen
> >
> > On Thu, Jun 4, 2020 at 7:08 PM Tom Lane  > > wrote:
> >
> > Adrian Klaver  > > writes:
> >  > On 6/4/20 9:43 AM, Tom Lane wrote:
> >  >> It's possible that the index had bloated to the point where the
> > planner
> >  >> thought it was cheaper to use a seqscan.  Did you make a note of
> the
> >  >> cost estimates for the different plans?
> >
> >  > I missed the part where the OP pointed to a SO question. In that
> >  > question where links to explain.depesz.com
> >  output.
> >
> > Ah, I didn't bother to chase that link either.
> >
> > So the cost estimates are only a fraction of a percent apart, making
> > it unsurprising for not-so-large changes in the index size to cause
> > a flip in the apparently-cheapest plan.  The real question then is
> > why the cost estimates aren't actually modeling the real execution
> > times very well; and I'd venture that that question boils down to
> > why is this rowcount estimate so far off:
> >
> >  >->  Parallel Seq Scan on oscar mike_three
> >  > (cost=0.000..1934568.500 rows=2385585 width=3141) (actual
> >  > time=159.800..158018.961 rows=23586 loops=3)
> >  >Filter: (four AND (NOT bravo) AND (zulu <=
> >  > 'echo'::timestamp without time zone))
> >  >Rows Removed by Filter: 8610174
> >
> > We're not going to be able to answer that if the OP doesn't wish
> > to decloak his data a bit more ... but a reasonable guess is that
> > those filter conditions are correlated.  With late-model Postgres
> > you might be able to improve matters by creating extended statistics
> > for this table.
> >
> >   

Re: Binary encoding of TIMESTAMP WITH TIME ZONE

2020-06-05 Thread Peter J. Holzer
On 2020-06-04 20:32:51 -0400, Tom Lane wrote:
> Joe Abbate  writes:
> > However, when using the same query using the Rust adapter the transition 
> > to a new row started showing up after midnight GMT.  I opened an issue 
> > on Github (https://github.com/sfackler/rust-postgres/issues/608 ) and 
> > the maintainer claimed the Rust adapter *had* to initialize timezone to 
> > UTC in order to properly convert "to and from time datatypes".  I 
> > pointed out that the timezone offset is available in psql and psycopg2, 
> > but then he replied the binary encoding of timestamptz does *not* 
> > include the timezone offset.
> 
> Indeed it does not, just as the on-disk format for it does not.  The
> representation is effectively always in UTC.  If you have some other
> timezone setting selected, timestamptz_out rotates to that zone for
> display purposes ... but the binary format doesn't.

However, the explanation still sounds off. I'm not familiar with Rust,
but I would expect the Rust time type to be based on Unix time_t or
some variant of it (maybe milliseconds as in Java, or nanoseconds, or a
different epoch). That also doesn't include a timezone, so conversion
should be straightforward and not require any timezone to be involved.
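
For reference, with integer datetimes (the default), that binary payload is a
64-bit count of microseconds since 2000-01-01 00:00:00 UTC, so the conversion
is just an epoch shift and a scale factor:

    -- the PostgreSQL epoch expressed as Unix time:
    SELECT extract(epoch FROM timestamptz '2000-01-01 00:00:00+00');  -- 946684800
    -- hence: unix_seconds = binary_microseconds / 1e6 + 946684800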

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Achilleas Mantzios

On 5/6/20 5:19 μ.μ., Thomas Kellerer wrote:

Achilleas Mantzios schrieb am 05.06.2020 um 14:05:

Plus PG does not directly support cross database queries using 3 part name, 
something
sqlserver excels at.

Maybe because SQL server does not have real databases but schemas instead ?
This sucks security wise.

That is wrong.

SQL Server has both: databases and schemas and it allows for standard compliant 
catalog.schema.table references.

I think you are confusing that with MySQL where a schema and a database are the 
same thing
I think that you are confusing what you think I am confusing. I am talking about our MS SQL installation here at work. It was not done by me anyway; I am the pgsql guy, but I still do most of the 
query stuff in MS SQL as well.

I already wrote in a subsequent email that my comment was far-fetched.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Possible improvement

2020-06-05 Thread Tom Lane
David Rowley  writes:
> On Fri, 5 Jun 2020 at 14:41, Paul van der Linden
>  wrote:
>> If I have a query like:
>> 
>> SELECT * FROM (
>> SELECT
>> CASE
>> WHEN field='value1' THEN 1
>> WHEN field='value2' THEN 2
>> END AS category
>> FROM table1
>> ) AS foo
>> WHERE category=1
>> 
>> doesn't use the index on field, while technically it could do that.
>> Is it hard to implement drilling down the constant in the WHERE to within 
>> the CASE?

> It doesn't look impossible to improve that particular case.  See
> eval_const_expressions_mutator() in clauses.c at T_CaseExpr. However,
> this would need to take constant folding further than we take it
> today. Today we just have the ability to simplify expressions which
> are, by themselves, an expression which will always evaluate to a
> constant value. This case is more complex as it requires something
> outside the CASE expr to allow the simplification to take place. In
> this case, we'd need to look at the other side of the OpExpr to see
> the const there before any transformation could simplify it.

I'd tend to see this as a transformation rule that acts on equality-
with-a-CASE-input, thereby avoiding the "action at a distance" problem.

> It's
> also not entirely clear that the simplification would always be a good
> idea.  What, for example if there was an index on the case statement
> but none on "field". The query may perform worse!

FWIW, I'm not too fussed about that objection.  If we rejected new
optimizations on the basis that somebody's optimized-for-the-old-way
query might perform worse, almost no planner changes would ever get in.
I think most people would feel that an optimization like this is an
improvement.  (I recall coming across a similar case in an
information_schema query just a few days ago.)  The hard questions
I would ask are
1. Is the transformation actually correct?
2. Does it improve queries often enough to be worth the planning cycles
expended to look for the optimization?

As far as #1 goes, note that this CASE produces NULL if "field" is
neither 'value1' nor 'value2', whereupon the equality operator would
also produce NULL, so that simplifying to "field='value1'" is not
formally correct: that would produce FALSE not NULL for other values
of "field".  We can get away with the replacement anyway at the top
level of WHERE, but not in other contexts.  Hence, it'd be wrong to
try to make this transformation in eval_const_expressions(), which is
applied to all expressions.  Possibly prepqual.c's canonicalize_qual()
would be a better place.
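
A quick illustration, using a value that matches neither branch:

    SELECT (CASE WHEN f = 'value1' THEN 1
                 WHEN f = 'value2' THEN 2 END) = 1 AS via_case,
           f = 'value1'                            AS direct
    FROM (VALUES ('value3')) AS t(f);
    -- via_case is NULL, direct is false: equivalent at the top level of WHERE,
    -- but not in general.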

The real problem here is going to be objection #2.  The rules under
which any optimization could be applied are nontrivial, so that we'd
spend quite a bit of time trying to figure out whether the optimization
applies ... and I'm afraid that most of the time it would not.

regards, tom lane




Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Thomas Kellerer
Achilleas Mantzios schrieb am 05.06.2020 um 14:05:
>> Plus PG does not directly support cross database queries using 3 part name, 
>> something
>> sqlserver excels at.
>
> Maybe because SQL server does not have real databases but schemas instead ?
> This sucks security wise.

That is wrong.

SQL Server has both: databases and schemas and it allows for standard compliant 
catalog.schema.table references.

I think you are confusing that with MySQL where a schema and a database are the 
same thing




Re: Index no longer being used, destroying and recreating it restores use.

2020-06-05 Thread Adrian Klaver

On 6/5/20 7:05 AM, Koen De Groote wrote:

I've collected all relevant info(I think so at least) and put it here:

The table in question is used to keep filepath data, of files on a 
harddrive.
The query in question is used to retrieve items which should be backed 
up, but have not yet been.


The relevant columns of the table:

                                                     Table "public.item"
                Column               |            Type             | 
Collation | Nullable |                   Default

+-+---+--+--
  id                                 | bigint                      | 
       | not null | nextval('item_id_seq'::regclass)
  shouldbebackedup                   | boolean                     | 
       | not null | true
  backupperformed                    | boolean                     | 
       | not null | false
  itemCreated                        | timestamp without time zone | 
       |          | now()
  filepath                           | text                        | 
       |          |



The existing index, which no longer gets used:
"index_in_question" btree (shouldbebackedup, backupperformed, 
itemCreated, filepath) WHERE shouldbebackedup = true AND backupperformed 
= false


The new index, made out of the exact same columns and conditions, get 
used immediately after creation:
CREATE INDEX CONCURRENTLY index_test ON item USING btree 
(shouldbebackedup, backupperformed, itemCreated, filepath) WHERE 
shouldbebackedup = true AND backupperformed = false;



The query in question will look something like this:
select * from item where shouldbebackedup=true and 
itemCreated<='2020-06-05 00:00:00.000' and backupperformed=false order 
by filepath asc, id asc limit 100 offset 10400;


The result of EXPLAIN ANALYZE for above.



Having done a count, there are around 13000 items here, without the 
offset and limit.
That being said, the amount is entirely dependant on what was added on a 
previous day.



I tried creating an extended statistic, like this, but it had no effect:
CREATE STATISTICS s1 (dependencies) ON shouldbebackedup, backupperformed 
FROM item;


Settings from the conf file I think are related:

shared_buffers = 1024MB
effective_cache_size = 2048MB
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 32MB

Finally, I state again that this database gets a nightly "vacuum analyze".

My thanks for looking at this and any suggestions one might have.

Regards,
Koen

On Thu, Jun 4, 2020 at 7:08 PM Tom Lane > wrote:


Adrian Klaver mailto:adrian.kla...@aklaver.com>> writes:
 > On 6/4/20 9:43 AM, Tom Lane wrote:
 >> It's possible that the index had bloated to the point where the
planner
 >> thought it was cheaper to use a seqscan.  Did you make a note of the
 >> cost estimates for the different plans?

 > I missed the part where the OP pointed to a SO question. In that
 > question where links to explain.depesz.com
 output.

Ah, I didn't bother to chase that link either.

So the cost estimates are only a fraction of a percent apart, making
it unsurprising for not-so-large changes in the index size to cause
a flip in the apparently-cheapest plan.  The real question then is
why the cost estimates aren't actually modeling the real execution
times very well; and I'd venture that that question boils down to
why is this rowcount estimate so far off:

 >                ->  Parallel Seq Scan on oscar mike_three
 > (cost=0.000..1934568.500 rows=2385585 width=3141) (actual
 > time=159.800..158018.961 rows=23586 loops=3)
 >                        Filter: (four AND (NOT bravo) AND (zulu <=
 > 'echo'::timestamp without time zone))
 >                        Rows Removed by Filter: 8610174

We're not going to be able to answer that if the OP doesn't wish
to decloak his data a bit more ... but a reasonable guess is that
those filter conditions are correlated.  With late-model Postgres
you might be able to improve matters by creating extended statistics
for this table.

                         regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Index no longer being used, destroying and recreating it restores use.

2020-06-05 Thread Koen De Groote
I've collected all relevant info (I think so, at least) and put it here:

The table in question is used to keep filepath data of files on a
hard drive.
The query in question is used to retrieve items which should be backed up,
but have not yet been.

The relevant columns of the table:

Table "public.item"
   Column   |Type |
Collation | Nullable |   Default
+-+---+--+--
 id | bigint  |
  | not null | nextval('item_id_seq'::regclass)
 shouldbebackedup   | boolean |
  | not null | true
 backupperformed| boolean |
  | not null | false
 itemCreated| timestamp without time zone |
  |  | now()
 filepath   | text|
  |  |


The existing index, which no longer gets used:
"index_in_question" btree (shouldbebackedup, backupperformed, itemCreated,
filepath) WHERE shouldbebackedup = true AND backupperformed = false

The new index, made out of the exact same columns and conditions, gets used
immediately after creation:
CREATE INDEX CONCURRENTLY index_test ON item USING btree (shouldbebackedup,
backupperformed, itemCreated, filepath) WHERE shouldbebackedup = true AND
backupperformed = false;


The query in question will look something like this:
select * from item where shouldbebackedup=true and itemCreated<='2020-06-05
00:00:00.000' and backupperformed=false order by filepath asc, id asc limit
100 offset 10400;

Having done a count, there are around 13000 items here, without the offset
and limit.
That being said, the amount is entirely dependent on what was added on a
previous day.


I tried creating an extended statistic, like this, but it had no effect:
CREATE STATISTICS s1 (dependencies) ON shouldbebackedup, backupperformed
FROM item;

Settings from the conf file I think are related:

shared_buffers = 1024MB
effective_cache_size = 2048MB
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 32MB

Finally, I state again that this database gets a nightly "vacuum analyze".

My thanks for looking at this and any suggestions one might have.

Regards,
Koen

On Thu, Jun 4, 2020 at 7:08 PM Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 6/4/20 9:43 AM, Tom Lane wrote:
> >> It's possible that the index had bloated to the point where the planner
> >> thought it was cheaper to use a seqscan.  Did you make a note of the
> >> cost estimates for the different plans?
>
> > I missed the part where the OP pointed to a SO question. In that
> > question where links to explain.depesz.com output.
>
> Ah, I didn't bother to chase that link either.
>
> So the cost estimates are only a fraction of a percent apart, making
> it unsurprising for not-so-large changes in the index size to cause
> a flip in the apparently-cheapest plan.  The real question then is
> why the cost estimates aren't actually modeling the real execution
> times very well; and I'd venture that that question boils down to
> why is this rowcount estimate so far off:
>
> >->  Parallel Seq Scan on oscar mike_three
> > (cost=0.000..1934568.500 rows=2385585 width=3141) (actual
> > time=159.800..158018.961 rows=23586 loops=3)
> >Filter: (four AND (NOT bravo) AND (zulu <=
> > 'echo'::timestamp without time zone))
> >Rows Removed by Filter: 8610174
>
> We're not going to be able to answer that if the OP doesn't wish
> to decloak his data a bit more ... but a reasonable guess is that
> those filter conditions are correlated.  With late-model Postgres
> you might be able to improve matters by creating extended statistics
> for this table.
>
> regards, tom lane
>


Re: When to use PARTITION BY HASH?

2020-06-05 Thread Jeff Janes
On Fri, Jun 5, 2020 at 6:12 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> On Thu, Jun 4, 2020 at 4:32 PM Jeff Janes  wrote:
>
>> On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <
>> oleksandr.shul...@zalando.de> wrote:
>>
>> With hash partitioning you are not expected, in general, to end up with a
>>> small number of partitions being accessed more heavily than the rest.  So
>>> your indexes will also not fit into memory.
>>>
>>> I have the feeling that using a hash function to distribute rows simply
>>> contradicts the basic assumption of when you would think of partitioning
>>> your table at all: that is to make sure the most active part of the table
>>> and indexes is small enough to be cached in memory.
>>>
>>
>> While hash partitioning doesn't appeal to me, I think this may be overly
>> pessimistic.  It would not be all that unusual for your customers to take
>> turns being highly active and less active.  Especially if you do occasional
>> bulk loads all with the same customer_id for any given load, for example.
>>
>
> For a bulk load you'd likely want to go with an empty partition w/o
> indexes and build them later, after loading the tuples.
>

That only works if the bulk load is starting from zero.  If you are adding
a million rows to something that already has 100 million, you would
probably spend more time rebuilding the indexes than you saved by dropping
them.  And of course to go with an empty partition, you have to be using
partitioning of some kind to start with; and then you need to be futzing
around creating/detaching and indexing and attaching.  With hash
partitioning, you might get much of the benefit with none of the futzing.
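
(For reference, the hash-partitioned setup being discussed is just something
like the following; table and column names are invented:)

    CREATE TABLE events (
        customer_id bigint NOT NULL,
        payload     text
    ) PARTITION BY HASH (customer_id);

    CREATE TABLE events_p0 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    CREATE TABLE events_p1 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 1);
    CREATE TABLE events_p2 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 2);
    CREATE TABLE events_p3 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 3);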


> So while you might not have a permanently hot partition, you could have
>> partitions which are hot in turn.  Of course you could get the same benefit
>> (and probably better) with list or range partitioning rather than hash, but
>> then you have to maintain those lists or ranges when you add new customers.
>>
>
> Why are LRU eviction from the shared buffers and OS disk cache not good
> enough to handle this?
>

Data density.  If the rows are spread out randomly throughout the table,
the density of currently relevant tuples per MB of RAM is much lower than
if they are in partitions which align with current relevance.  Of course
you could CLUSTER the table on what would otherwise be the partition key,
but clustered tables don't stay clustered, while partitioned ones stay
partitioned.  Also, clustering the table wouldn't help with the relevant
data density in the indexes (other than the index being clustered on, or
other ones highly correlated with that one).  This can be particularly
important for index maintenance and with HDDs, as the OS disk cache is in my
experience pretty bad at deciding when to write dirty blocks which have been
handed to it, versus retaining them in the hope that they will be re-dirtied soon,
or that adjacent blocks will be dirtied and then combined into one write.


>
> This actually applies to any partitioning scheme: the hot dataset could be
> recognized by these caching layers.  Does it not happen in practice?
>

Caching only happens at the page level, not the tuple level.  So if your
hot tuples are interspersed with cold ones, you can get poor caching
effectiveness.

Cheers,

Jeff


Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Achilleas Mantzios

On 5/6/20 3:33 μ.μ., Ravi Krishna wrote:

Plus PG does not directly support cross database queries using 3 part name, 
something
sqlserver excels at.

Maybe because SQL server does not have real databases but schemas instead ?
This sucks security wise.

SQLServer has real databases with its own transaction log files.  You can 
restore individual databases in a cluster.
They also have schemas which are not same as users (Oracle treats users and 
schemas same).

OK, I never meant that SQL Server does not have real databases; I meant it handles 
databases as top-level schemas.


For security, there is grant connect to the DB and further filtering based on 
schema.

PostgreSQL has stronger protection at the connection level, via pg_hba.conf. 
PostgreSQL also supports db-level GRANTs.
In MS SQL Server, if you need an additional DB for maintenance tasks or to act as an intermediate bridge (like in our case) with write access on it, then you automatically get write access to the main 
schema (OK, DB in MS SQL terms) and need further REVOKEs to fix security. This (security-wise) is just poor.

So the cross-db joins come with a price.
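
For example, the database-level grant in PostgreSQL looks like this (role and
database names are invented):

    REVOKE CONNECT ON DATABASE appdb FROM PUBLIC;
    GRANT CONNECT ON DATABASE appdb TO app_role;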


I find SQLServer implementation pretty strong in this.

The only time this can be a problem is when few databases failover in a 
mirrored environment (streaming replication in PG speak).
Then suddenly 3 part names would fail if the remote DB is no longer primary. My 
experience with SQLServer is badly dated. Last
time I worked was SS2008.  I believe in later versions they solved this problem 
by the failover group concept which failovers all
inter-related databases at one go.

BTW Mysql treats databases and schemas as same (that's what it was few years 
ago)




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Ravi Krishna
>> Plus PG does not directly support cross database queries using 3 part name, 
>> something
>> sqlserver excels at.

>Maybe because SQL server does not have real databases but schemas instead ?
>This sucks security wise.

SQL Server has real databases, each with its own transaction log files.  You can 
restore individual databases in a cluster.
They also have schemas, which are not the same as users (Oracle treats users and 
schemas as the same).

For security, there is GRANT CONNECT to the DB and further filtering based on 
schema.

I find the SQL Server implementation pretty strong in this regard.

The only time this can be a problem is when a few databases fail over in a 
mirrored environment (streaming replication in PG speak).
Then suddenly 3-part names would fail if the remote DB is no longer primary. My 
experience with SQL Server is badly dated; the last 
time I worked with it was SS2008.  I believe in later versions they solved this problem 
with the failover group concept, which fails over all
inter-related databases in one go.

BTW MySQL treats databases and schemas as the same (at least that's how it was a 
few years ago)



Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Achilleas Mantzios

On 2/6/20 10:45 μ.μ., Ravi Krishna wrote:

Generally speaking, I discourage having lots of databases under one PG
cluster for exactly these kinds of reasons.  PG's individual clusters
are relatively lightweight, after all.


Plus PG does not directly support cross database queries using 3 part name, 
something
sqlserver excels at.

Maybe because SQL server does not have real databases but schemas instead ?
This sucks security wise.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Potential optimisation for the creation of a partial index condition over a newly created nullable, non-default-valued column?

2020-06-05 Thread David Rowley
On Thu, 4 Jun 2020 at 17:59, Tim Dawborn  wrote:
> tmp=> \timing on
> Timing is on.
> tmp=> BEGIN;
> BEGIN
> Time: 1.333 ms
> tmp=> ALTER TABLE foo ADD COLUMN d integer NULL;
> ALTER TABLE
> Time: 1.581 ms
> tmp=> CREATE UNIQUE INDEX myindex ON foo (a, b, c) where d = 2;
> CREATE INDEX
> Time: 37758.880 ms (00:37.759)
> tmp=> COMMIT;
> COMMIT
> Time: 3.922 ms
>
> Given that d = 2 could not ever be true as the nullable, non-default-valued 
> column was just added inside the same transaction, I was hoping that the 
> index creation would be instantaneous, as it realised there's no rows that 
> this condition could be true for.

While it could technically be possible to do something like checking that the
xmin of the pg_attribute record for each column mentioned in the
index's predicate is set to the current transaction ID, and that the
index predicate refutes an expression containing those columns with IS
NULL clauses or whatever the DEFAULT expression value is, we just have
no way to know if any rows were inserted or updated between the ALTER
TABLE and the CREATE INDEX. Certainly, no other transaction could have
done anything since we added the column due to us holding the
AccessExclusiveLock.  We just don't really have a way to know if our
own transaction did anything in between. So to do this, we'd need to
invent something to track that, and that something wouldn't be free to
maintain and we'd likely need to maintain it all the time since we'd
be unable to predict what future commands might need to use that
information.

It would likely be easier to go about it by having some sort of ALTER
TABLE ADD INDEX command, then just chain the alter tables together. We
have various other optimisations when multiple subcommands are used in
a single ALTER TABLE.  However, I'm not suggesting we go and allow
indexes to be created in ALTER TABLE. I'm just suggesting that it
would likely be a better alternative than inventing something to track
when a table last had

David




Re: Multitenant architecture

2020-06-05 Thread Ravi Krishna
> 
> If the data size is more than 6 TB, which approach is better?


Do you require cross tenants queries?  If yes, then schemas are a better 
solution.



Re: When to use PARTITION BY HASH?

2020-06-05 Thread Oleksandr Shulgin
On Thu, Jun 4, 2020 at 4:32 PM Jeff Janes  wrote:

> On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <
> oleksandr.shul...@zalando.de> wrote:
>
> With hash partitioning you are not expected, in general, to end up with a
>> small number of partitions being accessed more heavily than the rest.  So
>> your indexes will also not fit into memory.
>>
>> I have the feeling that using a hash function to distribute rows simply
>> contradicts the basic assumption of when you would think of partitioning
>> your table at all: that is to make sure the most active part of the table
>> and indexes is small enough to be cached in memory.
>>
>
> While hash partitioning doesn't appeal to me, I think this may be overly
> pessimistic.  It would not be all that unusual for your customers to take
> turns being highly active and less active.  Especially if you do occasional
> bulk loads all with the same customer_id for any given load, for example.
>

For a bulk load you'd likely want to go with an empty partition w/o indexes
and build them later, after loading the tuples.  While it might not be
possible with any given partitioning scheme either, using hash partitioning
most certainly precludes that.


> So while you might not have a permanently hot partition, you could have
> partitions which are hot in turn.  Of course you could get the same benefit
> (and probably better) with list or range partitioning rather than hash, but
> then you have to maintain those lists or ranges when you add new customers.
>

Why are LRU eviction from the shared buffers and OS disk cache not good
enough to handle this?

This actually applies to any partitioning scheme: the hot dataset could be
recognized by these caching layers.  Does it not happen in practice?

--
Alex


Re: Multitenant architecture

2020-06-05 Thread Rob Sargent


> On Jun 5, 2020, at 2:54 AM, Vasu Madhineni  wrote:
> 
> 
> If the data size is more than 6 TB, which approach is better?
> 
>> On Fri, Jun 5, 2020 at 2:57 PM Laurenz Albe  wrote:
>> On Thu, 2020-06-04 at 23:52 +0800, Vasu Madhineni wrote:
>> > We are planning a POC on multitenant architecture in Postgres, Could you 
>> > please
>> > help us with steps for multitenant using schema for each application model.
>> 
>> For few tenants, you can keep identical tables in several schemas and
>> set "search_path" to select a tenant.
>> 
>> With many tenants, you are better off with one table that holds the
>> data for all clients.  You can use Row Level Security to have each
>> tenant see only his or her data, and it might be a good idea to
>> use list partitioning on the tenant ID.
>> 
>> Yours,
>> Laurenz Albe
>> -- 
>> Cybertec | https://www.cybertec-postgresql.com
>> 
The question is: how many separate data owners?

Re: Multitenant architecture

2020-06-05 Thread Vasu Madhineni
If the data size is more than 6 TB, which approach is better?

On Fri, Jun 5, 2020 at 2:57 PM Laurenz Albe 
wrote:

> On Thu, 2020-06-04 at 23:52 +0800, Vasu Madhineni wrote:
> > We are planning a POC on multitenant architecture in Postgres, Could you
> please
> > help us with steps for multitenant using schema for each application
> model.
>
> For few tenants, you can keep identical tables in several schemas and
> set "search_path" to select a tenant.
>
> With many tenants, you are better off with one table that holds the
> data for all clients.  You can use Row Level Security to have each
> tenant see only his or her data, and it might be a good idea to
> use list partitioning on the tenant ID.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: Possible improvement

2020-06-05 Thread David Rowley
On Fri, 5 Jun 2020 at 14:41, Paul van der Linden
 wrote:
> If I have a query like:
>
> SELECT * FROM (
> SELECT
>   CASE
>   WHEN field='value1' THEN 1
>   WHEN field='value2' THEN 2
>   END AS category
> FROM table1
> ) AS foo
> WHERE category=1
>
> doesn't use the index on field, while technically it could do that.
> Is it hard to implement drilling down the constant in the WHERE to within the 
> CASE?

It doesn't look impossible to improve that particular case.  See
eval_const_expressions_mutator() in clauses.c at T_CaseExpr. However,
this would need to take constant folding further than we take it
today. Today we just have the ability to simplify expressions which
are, by themselves, an expression which will always evaluate to a
constant value. This case is more complex as it requires something
outside the CASE expr to allow the simplification to take place. In
this case, we'd need to look at the other side of the OpExpr to see
the const there before any transformation could simplify it.  It's
also not entirely clear that the simplification would always be a good
idea.  What, for example, if there were an index on the CASE statement
but none on "field"? The query may perform worse!  The unfortunate
part about this is that, generally, when we perform constant folding,
we don't yet have an idea about which indexes exist.  I imagine the
only sane way to do it would be to allow expressions to have some sort
of "alternative" expression that could be matched up to the index
column instead. It wouldn't be a trivial piece of work to do that.

For the more simple cases, you can see from looking at:

postgres=# explain select * from pg_class where oid = (case when
'test' = 'test' then 1 else 0 end);
 QUERY PLAN
-
 Index Scan using pg_class_oid_index on pg_class  (cost=0.27..8.29
rows=1 width=260)
   Index Cond: (oid = '1'::oid)
(2 rows)

that we do simplify case statements which are by themselves constant.

> Is this something that could be put on some wishlist? If so where are the 
> most looked at ones?

There is a todo list of sorts in [1]. However, I'm really unsure if
anyone ever looks at it for something to do. Mostly, people have their
own ideas and problems to solve and spend their free cycles hacking
away at those. You might have equal luck waiting until December and
writing it on a piece of paper and setting it on fire. Likely there
would be more chance if it was something simple as a novice who's
looking into getting into working on Postgres might skim that list for
something to work on.  More experienced people, I imagine, would never
look there.  FWIW, many people who are now working on PostgreSQL once
came along with a question or idea like yours. Many have been unable
to escape ever since :)

David

[1] https://wiki.postgresql.org/wiki/Todo




Re: Multitenant architecture

2020-06-05 Thread Laurenz Albe
On Thu, 2020-06-04 at 23:52 +0800, Vasu Madhineni wrote:
> We are planning a POC on multitenant architecture in Postgres, Could you 
> please
> help us with steps for multitenant using schema for each application model.

For few tenants, you can keep identical tables in several schemas and
set "search_path" to select a tenant.

With many tenants, you are better off with one table that holds the
data for all clients.  You can use Row Level Security to have each
tenant see only his or her data, and it might be a good idea to
use list partitioning on the tenant ID.
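
A minimal sketch of that second approach (all names are invented; each session
sets its tenant before querying):

    CREATE TABLE tenant_data (
        tenant_id integer NOT NULL,
        payload   text
    ) PARTITION BY LIST (tenant_id);

    CREATE TABLE tenant_data_42 PARTITION OF tenant_data FOR VALUES IN (42);

    ALTER TABLE tenant_data ENABLE ROW LEVEL SECURITY;
    CREATE POLICY tenant_isolation ON tenant_data
        USING (tenant_id = current_setting('app.tenant_id')::integer);

    -- per session:
    SET app.tenant_id = '42';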

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Multitenant architecture

2020-06-05 Thread Paul Förster
Hi Vasu,

> On 04. Jun, 2020, at 17:52, Vasu Madhineni  wrote:
> We are planning a POC on multitenant architecture in Postgres, Could you 
> please help us with steps for multitenant using schema for each application 
> model.

look up the docs for "create database" and "create schema":

https://www.postgresql.org/docs/current/sql-createdatabase.html
https://www.postgresql.org/docs/current/sql-createschema.html

PostgreSQL is, by definition, what Oracle calls "multi tenant".
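
A tenant-per-schema setup then looks something like this (names are invented):

    CREATE SCHEMA clinic_a;
    CREATE TABLE clinic_a.patients (id bigint PRIMARY KEY, name text);
    SET search_path TO clinic_a;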

Cheers,
Paul





Re: PostgreSQL 11 with SSL on Linux

2020-06-05 Thread Paul Förster
Hi Susan,

You need to install the openssl RPM if you don't have it already, but it 
should be a standard package in all Linux distributions, so it should be in 
your repository.

Example from my host:
$ rpm -qa openssl
openssl-1.1.0i-lp151.1.1.noarch
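
Once a build with SSL support is installed, enabling it is roughly (untested;
the file names are just the defaults):

    # in postgresql.conf:
    ssl = on
    ssl_cert_file = 'server.crt'
    ssl_key_file = 'server.key'

    -- then, after a reload/restart, verify from psql:
    SHOW ssl;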

Cheers,
Paul

> On 04. Jun, 2020, at 18:50, Susan Joseph  wrote:
> 
> So the other issue I have is that I am running this on a server that is not 
> connected to the Internet.  So I have downloaded the RPM file but I can't 
> figure out how to install it without it trying to access files on the 
> Internet.  Are there other libraries I need to include in my download for 
> this to work on a non-networked server?
> 
> 
> 
> -Original Message-
> From: Magnus Hagander 
> To: Susan Joseph 
> Cc: pgsql-general@lists.postgresql.org 
> Sent: Thu, Jun 4, 2020 11:14 am
> Subject: Re: PostgreSQL 11 with SSL on Linux
> 
> 
> 
> On Thu, Jun 4, 2020 at 5:01 PM Susan Joseph  wrote:
> Can someone tell me if there is a yum version of PostgreSQL 11 that can be 
> installed on Linux that has SSL enabled?  Currently the only way I have 
> gotten SSL turned on in PostgreSQL is by doing the following commands:
> 
>   • tar xzvf /tmp/postgresql-11.2.tar.gz -C /data
>   • mv /data/postgresql-11.2 /data/pgsql
>   • cd /data/pgsql
>   • ./configure --prefix=/data/pgsql --without-readline --without-zlib 
> --with-openssl >> conf.log
>   • make
>   • make install
> 
> All the PostgreSQL versions available on yum from the postgresql.org site 
> have SSL enabled. Just install using the instructions from 
> https://www.postgresql.org/download/.
> 
> -- 
>  Magnus Hagander
>  Me: https://www.hagander.net/
>  Work: https://www.redpill-linpro.com/