Re: Was my question inappropriate for postgres?

2022-07-24 Thread Gavin Flower

On 25/07/22 12:31, Mladen Gogala wrote:

On 7/24/22 20:15, Gavin Flower wrote:
I suspect that most people do not know what 'M365' is; it would be good 
to explain.  I am curious as to what 'M365' is!


M365 is the latest member in the ArmaLite M-16 family of products. It 
can be connected to a database to track down DBAs who don't check 
their backups.



[...]








Re: Was my question inappropriate for postgres?

2022-07-24 Thread Gavin Flower

On 25/07/22 12:19, Adrian Klaver wrote:

On 7/24/22 17:15, Gavin Flower wrote:

On 25/07/22 11:56, Taka Taka wrote:

Hello.


I suspect that most people do not know what 'M365' is; it would be good 
to explain.  I am curious as to what 'M365' is!


New branding and pricing (subscription) of MS Office.

Microsoft 365
https://www.microsoft.com/en-us/microsoft-365


[...]

Thanks.  I use LibreOffice, which is free and easier to use -- it runs on 
many O/S's, including those from Microsoft.

https://www.libreoffice.org/





Re: Was my question inappropriate for postgres?

2022-07-24 Thread Gavin Flower

On 25/07/22 11:56, Taka Taka wrote:

Hello.

I posted the question below.

Hello.
I would like to know if psqlodbc_13_02 is compatible with M365.
Also, could you please tell me which of the psqlodbc_13_02 builds would be
suitable for Windows 10 64-bit?
Was my question inappropriate?
I wanted to know if the ODBC driver above would be compatible with 
Microsoft's M365.

Also, I find a variety of ODBC drivers.
Is it possible to know which of the drivers is suitable for Windows 10 
64-bit?

Or could you please tell me where to inquire?


Hi,

Please do not top post; bottom post, like me.

I suspect that most people do not know what 'M365' is; it would be good to 
explain.  I am curious as to what 'M365' is!


Can't help with Microsoft stuff, as I mainly use Linux, though I've been 
forced to use Microsoft O/S's in the past (last time was in a VM on 
Linux a few years back).


If you are using Postgres for really large data volumes, and/or require 
high performance, you might be better off upgrading to Linux --- it really 
depends on your own specific situation.  Note that most servers run Linux!



Cheers,
Gavin





Re: a very naive question about table names in Postgres

2021-12-31 Thread Gavin Flower

On 1/01/22 09:27, Martin Mueller wrote:


I have a very naïve question and don’t know where to look for the 
answer.  I use Postgres via AquaData Studio as a giant spreadsheet.  I 
create tables and give them names. I see the table names and know how 
to manipulate them with simple SQL queries.


I know that on my Mac the tables are kept in the data directory 
/Users/martinmueller/Library/Application Support/Postgres/var-13.  If 
I go there I see that internally the tables have numbers.  Somewhere 
in the system there must be a table that maps the name I gave to table 
X to the numerical identifier inside the database.


Where is that mapping and how can I query it?  On what page of the 
documentation do I find the answer?


I much prefer Postgres to Mysql for  a variety of reasons, but mostly 
for its elegant string functions. But in Mysql it seems to be much 
easier to keep track of tables.


Martin Mueller

Professor emeritus of English and Classics

Northwestern University


Hi Martin.

Don't know why you want the information you appear to be asking for!

However, if you are up to learning how to use psql from the command 
line, then you have an extremely powerful tool with which to query and 
manipulate tables in PostgreSQL!  I set up SQL in text files and get 
psql to execute them (\i my_query.sql) -- you can execute SQL directly 
in psql, but that is really only good for very simple queries.  I find 
using psql & a text editor a lot easier than using a GUI-based tool.
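
As to the mapping itself: it lives in the system catalogue pg_class.  A 
minimal sketch, where 'my_table' is just a placeholder for one of your 
table names:

    -- map a table name to its OID and on-disk file number
    SELECT relname, oid, relfilenode
    FROM pg_class
    WHERE relname = 'my_table';

    -- the file path, relative to the data directory
    SELECT pg_relation_filepath('my_table');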



Cheers,
Gavin






Re: [EXTERNAL] Re: Inserts and bad performance

2021-11-28 Thread Gavin Flower

On 28/11/21 17:17, Godfrin, Philippe E wrote:


Right you are sir! I figured that out a few hours ago!

pg

*From:* Ron 
*Sent:* Wednesday, November 24, 2021 10:58 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* [EXTERNAL] Re: Inserts and bad performance

On 11/24/21 1:15 PM, Godfrin, Philippe E wrote:

[snip]

I dropped the unique index, rebuilt the other indexes, and no change.


IMNSHO, this is the worst possible approach.  Drop everything *except* 
the unique index, and then (if possible) sort the input file by the 
unique key.   That'll increase buffered IO; otherwise, you're bopping 
all around the filesystem.


Using a bulk loader, if possible, would increase speed.
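
For instance, COPY is the usual bulk loader here; a minimal sketch, with 
the table name and file path as placeholders:

    -- load a pre-sorted CSV in one pass instead of row-by-row INSERTs
    COPY big_table (id, payload)
    FROM '/tmp/sorted_input.csv'
    WITH (FORMAT csv, HEADER true);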

--
Angular momentum makes the world go 'round.


Please don't top post!


Cheers,
Gavin





Re: "two time periods with only an endpoint in common do not overlap" ???

2021-10-16 Thread Gavin Flower

On 16/10/21 18:41, David G. Johnston wrote:

On Friday, October 15, 2021, Ron  wrote:


Prima facie, if you were told "numbers in the range 0-10", would
you really think, "ah, they *really* mean 0 through 9"?


I would indeed default to both endpoints of the range being 
inclusive.  I also begin counting at one, not zero.  I’ve long gotten 
past being surprised when computer science and my defaults don’t 
agree.  Choices are made and documented and that works for me.


As for this documentation, I never really gave the wording a second 
thought before, though I can definitely understand the complaint and 
like the somewhat wordier, but less linguistically challenging, 
phrasing the OP suggested (Boundary point, especially by itself, is 
not an improvement).


David J.


The reason arrays generally start at zero and not one is efficiency.

When indexes are zero based, the address of x[n] is simply:

    startAddress + n * sizeOfElement

If the first element of an array had the index one, then you would have to 
subtract one each time, so the address of x[n] would become:

    startAddress + (n - 1) * sizeOfElement


Half-open intervals make life a lot simpler, so they are the natural 
default: adjacent intervals have no numbers in common.


If you have 3 intervals spanning the range [0, 30), and you are only 
dealing with integers then you can split the range as:

[0, 9]       0 <= x <= 9
[10, 19]  10 <= x <= 19
[20, 29]  20 <= x <= 29

But what if you are dealing with floats? The above arrangement would not 
work, as 9.78 would not be in any interval, so you need half open 
intervals, such as:

[0, 10)      0 <= x < 10
[10, 20)  10 <= x < 20
[20, 30)  20 <= x < 30

So you know what number each interval starts at, and every number in the 
range is covered.
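
PostgreSQL's built-in range types work exactly this way: they default to 
half-open '[)' bounds.  A quick illustration:

    SELECT numrange(0, 10) @> 9.78;             -- true: 9.78 falls in [0,10)
    SELECT numrange(0, 10) && numrange(10, 20); -- false: adjacent ranges do not overlap
    SELECT int4range(0, 10, '[]');              -- normalised to the half-open [0,11)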



-Gavin







Re: JOB | DBA (Canada)

2021-10-14 Thread Gavin Flower

On 15/10/21 02:00, James Tobin wrote:

Hello, we are working with an employer that is looking to hire someone
capable of managing Mongo and Sybase databases at their office in
Canada.  Consequently, I had hoped that some members of this list may
like to discuss further.  Kind regards, James


If they wanted people from this list, then they should upgrade to 
PostgreSQL; they'll find PostgreSQL faster and more reliable than 
Mongo.  I suspect the same might be true for Sybase, but I've never done a 
detailed comparison.



Cheers,
Gavin





Re: Timestamp with vs without time zone.

2021-09-22 Thread Gavin Flower

On 22/09/21 20:11, Tim Uckun wrote:

I'll add another layer of complication.

You have a database server hosted in Australia, and that's also where
your web server and api server is. You have customers all over the
world though so you set up additional API servers in Europe, USA,
Japan etc.

A Korean user will fetch your single page app as static HTML from S3
with CloudFront. It will hit your Japanese API server, which will
fetch the data from your Japanese read-only replica, with the master
being in Australia.

The master DB that writes the records has to know your end user is in
Korea somehow, so you have to carry that time zone all the way across
those tiers.

To me the ideal solution would be to have a compound object which has
the time zone in it. This object gets passed through the tiers and end
up at the database where it's stored.


[...]

Please don't top post!  Bottom post like I am.

I imagine the compound object, call it timestamporigin (timestampo?), 
would have 3 components:


1. GMT time
2. Offset hours & minutes
3. client time zone code

This means:

1. you could calculate accurate time differences even between 2
   timestampo's regardless of which time zone they came from
2. you know the correct local time it was made
3. you know the time zone it came from
4. it would still be correct even if the machine's, or the server's,
   time zone file was updated and the changes applied retroactively.
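
A minimal sketch of such a compound object as a composite type (the names 
are illustrative only, not an agreed design):

    CREATE TYPE timestampo AS (
        utc_time   timestamptz,  -- 1. GMT time
        utc_offset interval,     -- 2. offset hours & minutes
        tz_name    text          -- 3. client time zone code, e.g. 'Asia/Seoul'
    );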


Cheers,
Gavin





Re: Question about behavior of conditional indexes

2021-09-21 Thread Gavin Flower

On 21/09/21 22:28, Koen De Groote wrote:

Greetings all,

Working on postgres 11.

I'm researching an index growing in size and never shrinking, and not 
being used anymore after a while.


The index looks like this:

|"index002" btree (action_performed, should_still_perform_action, 
action_performed_at DESC) WHERE should_still_perform_action = false 
AND action_performed = true |
So, there are 2 boolean fields, and a conditional clause for both. The 
table has about 50M rows, the index barely ever goes over 100K matched 
rows.


The idea is to search for rows matching these conditions quickly, and 
then update them. This happens daily.


This means the condition no longer matches the index. At this point, 
does the row get removed from the index? Or does it stay in there as a 
dead row?


I'm noticing index bloat on this index and am wondering if all these 
updated rows just stay in the index?


The index currently sits at 330MB. If I drop and re-create it, it's 1.5MB.

A cronjob runs a vacuum once per day, I can see the amount of dead 
rows dropping in monitoring software.


But should this also take care of indexes? In postgres 11, you can't 
reindex concurrently, so I was wondering if indexes are skipped by 
vacuum? Or only in case of conditional indexes?




So I'm wondering if the behavior is as I described.

Regards,
Koen De Groote


Can you upgrade to pg13?  A lot of work was done on indexes in pg12 & 
13.  So possibly your problem may have been resolved, at least in part.
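
For what it's worth, from pg12 onwards the bloated index could also be 
rebuilt in place without blocking writes:

    REINDEX INDEX CONCURRENTLY index002;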


Note that pg 14 is due out this month.


Cheers,
Gavin





Re: The tragedy of SQL

2021-09-17 Thread Gavin Flower

On 17/09/21 23:49, Raymond Brinzer wrote:

On Tue, Sep 14, 2021 at 9:06 AM Merlin Moncure  wrote:

I've long thought that there is more algebraic type syntax sitting
underneath SQL yearning to get out.

I wanted to come back to this, because I've been looking to take a
single problem (from my perspective) and explain it concisely.  Your
intuition is right on the mark.

Shell syntax is a pretty good lingua franca, so let's use it.  Say you
were working at the command line, and you said something like:

cat somefile | awk '{print $3 " " $1 " " $5;}' | sort | grep "^Robert"

And the shell responded with something like:  ERROR: syntax error at
or near "sort".  After a little tinkering, you discover:  that's
because the grep has to come before the sort.  But why?

The database is not going to evaluate relational operations in order,
passing the output of one into the next as a shell pipe does.
Nevertheless, they are logically independent.  Each should take in a
relation and either a second relation or a predicate, and return a
relation.  Or, to put it mathily, relations are closed under
relational operations.  So:

Operation 1 | Operation 2
and
Operation 2 | Operation 1

should both be valid, whether or not they're semantically equivalent
(though they often are).  The operations are inherently atomic, and
can be understood in isolation.


[...]

In Mathematics, which way round you do things may be important. For 
numbers in the Real & Complex domains it does not matter.  However, in 
the Quaternions it does matter: there, A * B is not always the same as 
B * A.  And amongst the Octonions it is even worse, as the order in which 
you group things may lead to different results, so A * (B * C) is not 
necessarily the same as (A * B) * C.


Another example is rotating things in 3 dimensions.  Hold a book with 
its front facing you.  Rotate the book towards you so it is now flat, 
then rotate the book along the vertical axis so it is now edge on.  
When you do the operations in the reverse order, then you get a 
different result!  Yes, you can blame the Quaternions.


In PostgreSQL, if the operations are not immutable (they rely on at 
least one volatile function, i.e. one that can return varying output for 
the same input parameters), then the order in which you do things could 
lead to different results.


For the optimizer to be effective it must be allowed to do 
operations in the best order it sees fit -- this is documented.  Just as 
you must not rely on the order in which results are returned, unless you 
explicitly have an ORDER BY -- as the system will extract results in the 
fastest way it knows, which may not necessarily be in the same order as 
the values were inserted.  This would be true even if you had a totally 
different query language.
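
A trivial illustration (the table and column names are placeholders):

    SELECT name FROM staff;               -- row order is whatever is cheapest
    SELECT name FROM staff ORDER BY name; -- the only way to guarantee an order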



Cheers,
Gavin






Re: The tragedy of SQL

2021-09-16 Thread Gavin Flower

On 17/09/21 11:29, Guyren Howe wrote:
[...]
The set theory is the _point_. SQL is a gargantuan distraction from 
using it efficiently.


Imagine if COBOL was the only widely-available programming language 
with functions. You might use it, because functions are really great 
abstraction for programming. That wouldn’t mean that COBOL wasn’t an 
utterly awful language.


SQL is like that, only for relations, sets and logic.



COBOL is horrible, but SQL is quite a neat language, though it has its 
issues.


Am happy to write SQL, but not that keen on getting back into COBOL!

SQL is far superior to COBOL, in their respective use cases.


Cheers,
Gavin





Re: Alter and move corresponding: was The tragedy of SQL

2021-09-16 Thread Gavin Flower

On 17/09/21 11:22, Rob Sargent wrote:

As far as alter, in 1981, before I became a programmer, I asked my
Cobol Programmer friend if there was anything you could put in a
program that would get you fired. He said yes, the alter statement :-).
In my 3 semesters of Cobol, I never once used the Alter statement.


[...]

I was very proud of using an ALTER GOTO in my first program in my 8 
week COBOL training course.  Now I cringe every time I think about it!


Before the course, I was fluent in FORTRAN IV and had programmed in 
ALGOL-68.  So I was an experienced programmer.



In what I see now as truly prescient of the early 80's U of BC CS 
department, algol-68, snobol, cobol, B, APL and half a dozen others 
I've completely forgotten each got a week with the same Towers of Hanoi 
task. I can only assume the powers saw the impending dump-heap 
relegation of the lot of them. The counter argument was that the major 
project was done in PL/1.




I've seen some documentation for all those languages except B, but I 
never programmed in any of: SNOBOL, B, nor APL.  I was sent on a 3 day 
PL/1 course, but never used it 'in anger'.


I once tried to discuss general programming ideas with a colleague, and 
started talking about a section of code.  They then proceeded to tell me 
the definition of SECTION in COBOL in detail; I didn't bother to tell 
them that I knew 3 things about SECTIONs in COBOL that they hadn't 
mentioned.  I gave up the discussion, saw no point in continuing, and I 
didn't want to appear patronising.


Until one has programmed in several languages, preferably of different 
types, it is difficult to separate out programming concepts from the 
philosophy and implementation of individual languages.


I think any Master Programmer should have fluency in at least 3 
languages and familiarity with at least 3 more.  So I think at least 3 
languages should be taught to CS students.  And of course the importance 
of how to RTFM effectively!


Once I helped a colleague debug a Smalltalk example based on a magazine 
article I'd read 2 years earlier.  I'm certain they implicitly assumed 
that I was far more experienced in Smalltalk than I was!  Perhaps I 
should ask her, but she's probably forgotten -- that was about 30 years 
ago.  Now I've been married to her for 25 years.


Another time I helped someone develop a screen saver in Visual Basic, 
though it was many years later before I realized the name of the 
language.  I'd never seen VB before.  I explained the OnEvent 
concept even though it was totally new to me, and I still successfully 
helped them to implement at least one such construct in the screen 
saver.  Having experience with a wide variety of different programming 
languages was a huge advantage.



Cheers,
Gavin





Re: Alter and move corresponding: was The tragedy of SQL

2021-09-16 Thread Gavin Flower

On 16/09/21 04:52, Steve Litt wrote:

Gavin Flower said on Wed, 15 Sep 2021 13:49:39 +1200


Hi Michael,

[snip]


COBOL has strange verbs like 'move corresponding' that could
accomplish complicated tasks in a few lines but you have to be
careful that you knew what you were asking for!

In our site that was banned as being too dangerous.

And how about the 'lovely' ALTER GOTO construct???

Children don't try to use these constructs at home, as even
experienced adults get burnt using them!

I never Cobolled professionally, but took 3 semesters of Cobol at
Santa Monica Community College in Santa Monica, California USA. They
taught us move corresponding, I used it, it was handy. I'd use it again
if I were a Cobol professional.

As far as alter, in 1981, before I became a programmer, I asked my
Cobol Programmer friend if there was anything you could put in a
program that would get you fired. He said yes, the alter statement :-).
In my 3 semesters of Cobol, I never once used the Alter statement.


[...]

I was very proud of using an ALTER GOTO in my first program in my 8 week 
COBOL training course.  Now I cringe every time I think about it!


Before the course, I was fluent in FORTRAN IV and had programmed in 
ALGOL-68.  So I was an experienced programmer.



Cheers,
Gavin








Re: The tragedy of SQL

2021-09-16 Thread Gavin Flower

On 17/09/21 04:26, Michael Nolan wrote:
In the same 1971 seminar where we studied Algol-68, we had to read and 
write a short paper on the 1970 Codd paper on relational  theory, 
which had only been out for about a year.  The professor running the 
seminar noted that Codd proved that the relational model worked, but 
didn't guarantee that it worked efficiently enough to be implementable 
without immense computer resources, which of course became available.

--
Mike Nolan



Developing an effective optimizer might have helped ...


Cheers,
Gavin






Re: Alter and move corresponding: was The tragedy of SQL

2021-09-15 Thread Gavin Flower

On 16/09/21 05:47, Michael Nolan wrote:
When I was working at the help desk at the computer center as an 
undergrad, the professor in charge of that group used to give us 
interesting little language tests for things we needed to watch out 
for, especially with beginning programmers.


One of his favorite ploys was to use the EQUIVALENCE function in 
FORTRAN to equivalence a constant with a variable, then assign 
something to that variable.  In one of the FORTRAN compilers, that 
would result in overwriting the constant, so all future uses of it 
would have the new value.  This would break many things, of course.

--
Mike Nolan



On the IBM 1130 we were warned not to assign a value to a number, like

    3 = 7

if we did then apparently

    x = 6 * 3

would assign the value of 42 to x.

Never tried it, I now wish I had!


Cheers,
Gavin





Re: The tragedy of SQL

2021-09-14 Thread Gavin Flower

Hi Michael,

Appropriate comments interspersed below.

I'm happy writing SQL and moderately competent using it.  But like all 
the languages I've used, without exception, it has its pain points.



Cheers,
Gavin

On 15/09/21 11:25, Michael Nolan wrote:
Of all the languages I wrote in, I think SNOBOL was the most fun to 
write in, and LISP the least fun.  Control Data


I once read the first 40 pages of a SNOBOL manual, but unfortunately 
never got the opportunity to try it out.




assembler language
programming was probably the most precise, because you could crash the 
OS with a single mis-placed character, something I did more than once.


I knew a senior tech programmer who inadvertently tried to rewind a disc 
back to BT and switch to 800 BPI; fortunately, only his program crashed.


Another time the memory protection was disabled (ICL no longer had the 
capacity to fix it) on our ICL 4/50 mainframe, and someone wrote a 
program to write 'J' into an area of memory.  That was legitimate, but 
it had a bug which caused it to write into more memory than it should 
have...  The machine crashed.  Our ICL 4/50 was the last surviving 
operational machine of its type in the world.


Our main machines were two ICL 4/72's, each having a single fast 2MHz 
processor and a massive 1 MB of core memory, with a battery of big 
exchangeable disks, each with a whopping 60 MB of capacity, & tape drives 
for 12" reels.





In a graduate-level course, we studied ALGOL-68, which had so many 
things in it that I'm not sure anybody ever actually implemented the 
full language.  (But then again, has anybody implemented EVERYTHING in 
the SQL standard?)


I learnt ALGOL-68 from a manual written in Backus-Naur notation on my 
own initiative.  Tried to write a simple statistics program, never 
finished it.  That was before I really understood the value of rigorous 
indenting standards.





COBOL has strange verbs like 'move corresponding' that could 
accomplish complicated tasks in a few lines but you have to be careful 
that you knew what you were asking for!


In our site that was banned as being too dangerous.

And how about the 'lovely' ALTER GOTO construct???

Children don't try to use these constructs at home, as even experienced 
adults get burnt using them!





And I'd take the SQL standard over the CODASYL standard any time!
--


Agreed!



Mike Nolan







Re: The tragedy of SQL

2021-09-14 Thread Gavin Flower

On 15/09/21 10:30, Peter J. Holzer wrote:

On 2021-09-14 16:51:39 -0400, Mladen Gogala wrote:

As software engineers, we are very much opposed to poetry, especially
those among us using Perl.

When I learned Perl, Perl poetry was a real thing!

 hp


Perl is too verbose, use APL!  See: https://tryapl.org/

To be honest, I've looked at APL but never programmed in it.


Cheers,
Gavin





Re: The tragedy of SQL

2021-09-14 Thread Gavin Flower

On 15/09/21 04:10, Michael Nolan wrote:
I started programming in 1967, and over the last 50+ years I've 
programmed in more languages than I would want to list.  I spent a 
decade writing in FORTRAN on a GA 18/30 (essentially a clone of the 
IBM 1130) with limited memory space, so you had to write EFFICIENT 
code, something that is a bit of a lost art these days.  I also spent 
a decade writing in COBOL.


I've not found many tasks that I couldn't find a way to write in 
whatever language I had available to write it in. There may be bad (or 
at least inefficient) languages, but there are lots of bad programmers.

--
Mike Nolan
htf...@gmail.com


I remember programming in FORTRAN IV on an IBM 1130 at Auckland 
University.  My first attempt to explore Pythagorean triples was written 
in FORTRAN on that machine.  Finally had a useful program written in 
Java about 30 years later.  There are 4 triples starting with 60 that 
satisfy A^2 + B^2 = C^2 where A < B < C and the numbers are mutually 
prime. I was able to handle values of A up to the size of long, so I got 
some pretty big numbers for B & C.  Java's BigInteger class has its uses!


On the IBM 1130 it was faster to use X * X to find the square of a value 
than to use the power notation (of which I've forgotten the syntax).


And for my many sins, I spent years programming in COBOL.

I've written code in over 30 languages.  Probably had most fun writing a 
couple of trivial programs in ARM2/3 assembler -- all instructions 
except one are conditional.


There is no one perfect language, despite what some people might insist!


Cheers,
Gavin






Re: Series of 10 questions about the use of postgresql, generally.

2021-08-06 Thread Gavin Flower

On 6/08/21 4:45 pm, A Z wrote:
I have been going through the free online book LEARNING postgresql 
book, that has been compiled by Stack Overflow contributors. I have 
gotten to the point where I have the following series of unanswered 
questions:

[...]


 9. What does the VARYING keyword do to an applicable type in a create
table statement? CHARACTER VARYING becomes entirely equivalent to
VARCHAR. Static, limited types become more dynamic and are unlimited.


I would suggest that you consider using the type 'text' instead.

see:  https://www.postgresql.org/docs/13/datatype-character.html


[...]






Re: Why is Postgres only using 8 cores for partitioned count? [Parallel Append]

2021-02-14 Thread Gavin Flower

On 14/02/2021 22:47, David Rowley wrote:

On Sun, 14 Feb 2021 at 13:15, Seamus Abshere
 wrote:

The comment from Robert says: (src/backend/optimizer/path/allpaths.c)

 /*
  * If the use of parallel append is permitted, always request 
at least
  * log2(# of children) workers.

In my case, every partition takes 1 second to scan, I have 64 cores, I have 64 
partitions, and the wall time is 8 seconds with 8 workers.

I assume that if it planned significantly more workers (16? 32? even 64?), 
it would get significantly faster (even accounting for transaction cost). So 
why doesn't it ask for more? Note that I've set max_parallel_workers=512, etc. 
(postgresql.conf in my first message).

There's perhaps an argument for allowing ALTER TABLE  SET (parallel_workers=N); to be set on partitioned tables, but
we don't currently allow it.

[...]

David


Just wondering why there is a hard coded limit.

While I agree it might be good to be able to specify the number of workers, 
surely it would be possible to derive a suitable default based on the 
number of effective processors available?
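
For plain (non-partitioned) tables, the per-table override David mentions 
does already exist; a hedged sketch, with the table name as a placeholder:

    ALTER TABLE measurements SET (parallel_workers = 16);
    SET max_parallel_workers_per_gather = 16;  -- session limits still apply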



Cheers,
Gavin





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-24 Thread Gavin Flower

On 24/09/2020 18:13, Tony Shelver wrote:



On 9/23/20 11:51 AM, tutilu...@tutanota.com wrote:

>     Huh? A schema is just a name space, why does it matter how the
>     extension chooses to define it? I mean you could have number of
>     permutations of postgis.
>
> I'm saying that PostGIS has a bug due to incorrectly constructed
> internal queries which makes it impossible to properly name the schema
> where PostGIS is to reside, causing my database to look very ugly when
> it has to say "postgis" instead of "PostGIS" for PostGIS's schema. And
> that was an example of how sloppy/bad third-party things always are, and
> is one reason why I don't like it when I have to rely on "extensions".


All lowercase is good, as you don't have to remember which bits are 
capitalized.  And besides, there are far more important issues to 
consider as Tony has covered in detail.  It is definitely not a bug!


[..]

Oracle's equivalent is probably the closest.  Pity that installing 
Oracle and their products as a whole is a nightmare, and rather 
wallet-draining...


Have a look at GIS / Mapping projects around the world, a majority are 
implemented on PostGIS.  Openstreetmap is probably the biggest (think 
open source version of Google Maps), and it moved to PostGIS from 
MySQL several years ago.
We did a lot of research into PostGIS, as GIS / tracking is a core 
part of our business.

We didn't find a product that could compare on

  * Maturity
  * Functionality
  * Performance
  * Cost
  * Documentation
  * Support (huge community)


I remember going to free seminars promoting the Oracle database over 25 
years ago, and the only thing I can remember now is the pie charts 
saying how much revenue Oracle had versus all the others.  Never how 
many transactions, size of databases, number of users, nor any really 
useful metric -- just how much Oracle was being paid!


To be honest, I've heard many bad things about Oracle, and rarely 
anything good.


There are many reasons for going with PostgreSQL and PostGIS, but you 
are free to use something else if you prefer.



Cheers,
Gavin






Re: Issues of slow running queries when dealing with Big Data

2020-07-28 Thread Gavin Flower

On 29/07/2020 03:51, Shaozhong SHI wrote:

Hi,

Please find the result of select name, setting, source from 
pg_settings where source <> 'default';


Regards,

Shao



On Tue, 28 Jul 2020 at 16:42, Michael Lewis wrote:


On Tue, Jul 28, 2020 at 7:59 AM Shaozhong SHI wrote:

Hi, Gavin,

PostgreSQL Version - 10.11. PostGIS Version - 2.4.7. Operating
System - Red Hat Enterprise Linux 7.7 .

That is all I know at the moment.

As I understand, our IT staff is building another one to sit
on Azure.


Please don't top-post in these mailing lists.


Shaozhong, you're top posting again!

Although, sometimes it is appropriate to intersperse your comments in 
the previous email, like this.





Can you share the results of the below?

select name, setting, source from pg_settings where source <>
'default';

Basically you're being asked to put your reply after the rest of the 
email, like I am doing here!




Cheers,
Gavin





Re: Issues of slow running queries when dealing with Big Data

2020-07-28 Thread Gavin Flower

On 28/07/2020 22:54, Shaozhong SHI wrote:
It has been found that issues occur when Big Data is being handled 
with PostGIS. Typically, queries can be killed by the system or memory 
runs out.  Often, queries can be very slow.  Sometimes, they will take 
days or weeks to complete.


What are the best approaches and means for improving the performance 
of queries and processes in PostgreSQL/PostGIS?


Can anyone shed light on this?

Regards,

Shao


Probably helps if you can give us more details!

Such as O/S, versions of PostgreSQL/PostGIS, and hardware used. Plus 
anything else you think might be relevant.



Cheers,
Gavin





Re: Inherited an 18TB DB & need to backup

2020-05-16 Thread Gavin Flower

On 17/05/2020 08:12, Ron wrote:

On 5/16/20 7:18 AM, Rob Sargent wrote:

Another problem is storage devices fail.  S3 storage lakes _should_ 
be checking your data integrity on a regular basis and possibly 
maintaining copies of it iin multiple locations so you're not 
vulnerable to a site disaster.

Tape FTW!!

Or WTF Tape??   :)


Tape is durable, long-lasting, high-density, under your control, can 
be taken off-site (don't underestimate the bandwidth of a station 
wagon full of tapes hurtling down the highway!) and -- with the proper 
software -- is multi-threaded.



Don't you mean multi-spooled??? :-)

Fascinating problem.  If the dump & load programs are designed to take a 
parameter for N drives for effective parallel operation, and N > 2, then 
things will run a lot faster.


I can think of several ways that the data can be dumped in parallel, with 
various trade-offs.  Would love to know how it's implemented in practice.



Cheers,
Gavin





Re: Work hours?

2019-08-29 Thread Gavin Flower

On 28/08/2019 15:22, Christopher Browne wrote:



On Tue, Aug 27, 2019, 6:27 PM stan wrote:


I am just starting to explore the power of PostgreSQL's time and date
functionality. I must say they seem very powerful.

I need to write a function that, given a month, and a year as
input returns
the "work hours" in that month. In other words something like

8 * the count of all days in the range Monday to Friday) within that
calendar month.

Any thoughts as to the best way to approach this?


In data warehouse applications, they have the habit of creating tables 
that have various interesting attributes about dates.


https://medium.com/@duffn/creating-a-date-dimension-table-in-postgresql-af3f8e2941ac

I'd be inclined to solve this by defining various useful sets of 
dates; you might then attach relevant attributes to a dimension table 
like the d_date table in the article.


- a table with all weekdays (say, Monday to Friday)

- a table listing statutory holidays that likely need to be excluded

These are going to be small tables even if you put 10 years worth of 
dates in it.



[...]

You don't need a whole table for weekdays.  You can easily calculate the 
number of weekdays simply from knowing the first day of the month and 
how many days are in the month.
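
A minimal sketch of stan's work-hours function as a single query (the 
month is hard-coded here purely as an example):

    -- 8 hours for each Monday..Friday in August 2019
    SELECT count(*) * 8 AS work_hours
    FROM generate_series(date '2019-08-01',
                         date '2019-08-31',
                         interval '1 day') AS d
    WHERE extract(isodow FROM d) < 6;  -- isodow: 1 = Monday ... 7 = Sunday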



Cheers,
Gavin





Re: Variable constants ?

2019-08-16 Thread Gavin Flower

On 16/08/2019 09:27, Rich Shepard wrote:

On Thu, 15 Aug 2019, stan wrote:


I need to put a few bossiness constants, such as a labor rate multiplier
in an application. I am adverse to hard coding these things. The best 
plan

i have come up with so far is to store them in a table, which would have
only 1 row, and a column for each needed constant.

Anyone have a better way to do this?

Failing a better way is there some way I can limit this table to only
allow one row to exist?


Stan,

I've resolved similar issues with changing regulatory agency staff. 
For your

application(s) I suggest a table like this:

create table labor_rate_mult (
  rate real primary key,
  start_date  date not null,
  end_date    date
)

This provides both a history of labor rate multipliers and the ability to
select either the most current one or a previous one.

If other factors affect the rate, add attribute columns for them.

Regards,

Rich



I think a better approach is to:

 * include time
 * store independent of timezone (avoids problems with daylight saving)
 * only have one timestamp

   DROP TABLE IF EXISTS labour_rate_mult;


   CREATE TABLE labour_rate_mult
   (
        rate_name text,
        effective_start   timestamptz,
        rate_value    real,
        valid boolean,
        PRIMARY KEY (rate_name, effective_start)
   );


   INSERT INTO labour_rate_mult
   (
        rate_name,
        effective_start,
        rate_value,
        valid
   )
   VALUES  -- test data omits time for clarity
        ('junior', '2001-02-01', 4.2, true),
        ('junior', '2008-11-16', 6, true),
        ('junior', '2012-07-23', 4.5, true),
        ('junior', '2019-09-11', 3.7, true),
        ('junior', '2030-12-31', 0, false),
        ('adult', '2001-01-01', 8.4, true),
        ('adult', '2012-07-23', 9.9, true),
        ('adult', '2030-05-03', 0, false)
   ;


   SELECT
        rate_value
   FROM
        labour_rate_mult
   WHERE
        rate_name = 'junior'
        AND effective_start <= '2012-07-23' -- stand in for
   CURRENT_TIMESTAMP
        AND valid
   ORDER BY
        effective_start DESC
   LIMIT 1
   ;


Cheers.
Gavin

P.S.
Previously, I accidentally just sent it to Rich!





Re: Error XX000 After pg11 upgrade

2019-08-15 Thread Gavin Flower

On 16/08/2019 04:35, Simon Windsor wrote:

Hi

The full system used to run in an Oracle DB and was ported to Postgres 9.5 
about 2+ years ago, and partitioned using inheritance tables.


Since then pg_upgrade has been used to upgrade to pg10 (with apt 
upgrade to take it to 10.5 occasionally).


Last week, pg_upgrade was again used to upgrade to pg11.4.

Since then, large bulk inserts of configuration changes are failing 
with this error, but ad hoc and small changes are working ok.


The actual error is reported by a Java process

Caused by: org.postgresql.util.PSQLException: ERROR: cache lookup 
failed for type 22079
  Where: SQL statement "insert into configObjectsFull_2019 values 
(new.*)"
PL/pgSQL function configobjectsfull_insert_trigger() line 28 at SQL 
statement
    at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2310)

    at

The DB table and trigger are attached.

As you can see, the id and timestamp are set on insert, and are used 
to determine the partition used.


This issue started after the upgrade to pg11; pg10 and pg9 had no 
problems.


Any ideas would be appreciated.



[...]

It probably will make no difference, but have you considered testing using 
pg 11.5?  It would at least rule out a lot of red herrings!



Cheers,
Gavin






Re: migrating from Oracle to PostgreSQL 11

2019-07-11 Thread Gavin Flower



On 11/07/2019 17:53, Hitesh Chadda wrote:

Hello,
The target is PostgreSQL 10.1. I would like to know possible approach 
that can be followed for doing the migration.


Regards
H.kumar



On Wednesday, June 26, 2019, Hitesh Chadda wrote:


Hi PostgreSQL Support,

I have to migrate from Oracle 11g to PostgreSQL 11.4. Please
suggest the best solution for doing the migration.

Regards

H.Kumar


Hitesh,

Please don't top post, as the convention in these lists is to add new 
stuff at the bottom.


Why not upgrade Oracle to PostgreSQL 11?


Cheers,
Gavin






Re: Too short field

2019-07-03 Thread Gavin Flower

On 03/07/2019 23:30, Karl Martin Skoldebrand wrote:


Hi,

We solved the problem of yesterday where I was looking at sequences. 
It eventually turned that sequence was irrelevant (at least in the 
PostgreSQL sense) to the problem.


Now, we have a bug in another application that prevents an automatic 
tool to enter certain users in the database. The organisational field 
is varchar(60) while the actual Organisation “abbreviation” may be as 
long as 70 characters (don’t ask why).


What happens to the data if I simply redefine the table field as 
varchar(80) (or something, at least 70+)?  Does “everything” break 
database side, or can I just go on running the app as is?


Do we need to restart databases or something else that requires an 
interrupted service?


Best regards,

Martin S







Is there any reason to put a limit on the number of characters in the 
field in the database?


If not, consider using the 'text' type.
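
Either way, the fix is a quick one; a hedged sketch, with the table and 
column names as placeholders (in modern PostgreSQL widening a varchar is 
a catalogue-only change, so no restart is needed):

    ALTER TABLE users ALTER COLUMN organisation TYPE varchar(80);
    -- or drop the limit altogether:
    ALTER TABLE users ALTER COLUMN organisation TYPE text;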


Cheers,
Gavin





Re: General question about OS

2019-06-09 Thread Gavin Flower

On 10/06/2019 09:45, Drexl Spivey wrote:

Hello all,

Don't want to start one of those endless internet tug of wars without 
end threads, but would like some other people's opinions.


First off, I use all Operating systems without problems, personally 
defaulting to linux at home, but mostly mac at work. I use windows, 
when necessary, not my favorite.


It seems in my little database development experience that this is one 
area where windows might actually offer the best, most 
mature/developed choices. If this is the case, I should acclimate 
myself to it more.


I have found many applications have been ported to other systems, but 
they don't seem as "good", and some programs like Power Designer are 
windows only.


Is database work heavily windows leaning??
--
Sent from my Android device with K-9 Mail. Please excuse my brevity. 


I got into the game before Microsoft ever existed.  I find Linux far 
easier to use, for both general things and development.


Can't say I'm familiar with what Microsoft offers now, but it still 
seems to be a security nightmare and very restrictive.



Cheers,
Gavin





Re: Questions about btree_gin vs btree_gist for low cardinality columns

2019-06-01 Thread Gavin Flower

On 01/06/2019 14:52, Morris de Oryx wrote:
[...]
For an example, imagine an address table with 100M US street addresses 
with two character state abbreviations. So, say there are around 60 
values in there (the USPS is the mail system for a variety of US 
territories, possessions and friends in the Pacific.) Okay, so what's 
the best index type for state abbreviation? For the sake of argument, 
assume a normal distribution so something like FM (Federated States of 
Micronesia) is on a tail end and CA or NY are a whole lot more common.


[...]

I'd expect the distribution of values to be closer to a power law than 
the Normal distribution -- at the very least, a few states would have the 
most lookups.  But this is my gut feel, not based on any scientific 
analysis!



Cheers,
Gavin





Re: how to add more than 1600 columns in a table?

2019-04-24 Thread Gavin Flower

On 25/04/2019 10:55, Alvaro Herrera wrote:

On 2019-Apr-24, pabloa98 wrote:


Regarding (2), we are fine with adding a patch and recompiling a patched
version for our server databases.

But we are open to helping add support for thousands of columns as a
compile-time parameter if there are other people interested.

It's hard to say what you're doing wrong when we don't know
what are you actually doing.

I think raising the limit requires changing ItemIdData, t_hoff, and a
few members of PageHeaderData at the very least.  Reading the three
header files involved carefully would probably point out areas I've
forgotten to mention.  I think if you enlarge t_hoff and lp_off/lp_len
to 16 bits, you can use 64kB blocks, which might be useful too.

Note that with pg12 you could have your own table AM that supported
wider ItemIds as a (small?) change on heapam, rather than supplant it
for all tables.  That way you would only pay the (probably considerable)
cost of the wider line pointers on all tables ...

I wonder if it might prove a killer feature for some niche uses! 
Stranger things have come to pass.


I suspect that going beyond 1600 columns would never be the default, even 
if the pg core devs were happy to allow it as an official option 
(presumably at compile time?), as I think it would have negative 
performance impacts on most uses of pg -- as Alvaro hinted at.  IMnsHO


Certainly there will be many people intrigued as to what you are trying to 
do, even if we never want to do the same ourselves.







Re: how to add more than 1600 columns in a table?

2019-04-24 Thread Gavin Flower

On 25/04/2019 10:11, pabloa98 wrote:

Thank you Joe! I will take a look

Pablo

On Wed, Apr 24, 2019 at 1:47 PM Joe Conway wrote:


[...]

Hi Pablo,

The convention here is to bottom post, or to intersperse comments, like 
in all the replies to you.


So it would be appreciated if you did that, rather than top post as you 
have been doing.


I strongly suspect that:

   (1) making pg handle more than 1600 columns would be way more
   complicated than you can imagine

   (2) such a change would be unlikely to be accepted into the main
   line, which would mean you'd have to reapply your patch for every
   new version of pg you wanted to use!


Cheers,
Gavin





Re: Multicolumn index for single-column queries?

2019-04-18 Thread Gavin Flower

On 19/04/2019 14:01, Gavin Flower wrote:
[...]


Also there will be fewer index entries per block for the multi_index, 
which is why the I/O count will be higher even in the best case where 
there is an equal number of rows referenced by each index entry.



Not sure why my system had this still in my draft folder!


Sorry for the duplication...





Re: Multicolumn index for single-column queries?

2019-04-18 Thread Gavin Flower

On 19/04/2019 01:47, Harald Fuchs wrote:

Andreas Kretschmer  writes:


Am 18.04.19 um 08:52 schrieb rihad:

Hi. Say there are 2 indexes:

     "foo_index" btree (foo_id)

     "multi_index" btree (foo_id, approved, expires_at)


foo_id is an integer. Some queries involve all three columns in
their WHERE clauses, some involve only foo_id.
Would it be ok from general performance standpoint to remove
foo_index and rely only on multi_index? I know that
PG would have to do less work updating just one index compared to
updating them both, but wouldn't searches
on foo_id alone become slower?

it depends .

it depends on the queries you are using, on your workload. a
multi-column-index will be larger than an index over just one column,
therefore you will have more disk-io when you read from such an index.

I think it also depends on the average number of rows having the same foo_id.

The number of rows referenced by an index entry for the multi_index will 
always be less than or equal to those for the matching foo_index.


Also there will be fewer index entries per block for the multi_index, 
which is why the I/O count will be higher even in the best case where 
there is an equal number of rows referenced by each index entry.






Re: Multicolumn index for single-column queries?

2019-04-18 Thread Gavin Flower

On 19/04/2019 01:47, Harald Fuchs wrote:

Andreas Kretschmer  writes:


Am 18.04.19 um 08:52 schrieb rihad:

Hi. Say there are 2 indexes:

     "foo_index" btree (foo_id)

     "multi_index" btree (foo_id, approved, expires_at)


foo_id is an integer. Some queries involve all three columns in
their WHERE clauses, some involve only foo_id.
Would it be ok from general performance standpoint to remove
foo_index and rely only on multi_index? I know that
PG would have to do less work updating just one index compared to
updating them both, but wouldn't searches
on foo_id alone become slower?

it depends .

it depends on the queries you are using, on your workload. a
multi-column-index will be larger than an index over just one column,
therefore you will have more disk-io when you read from such an index.

I think it also depends on the average number of rows having the same foo_id.

The number of rows referenced by an index entry for the multi_index will 
always be less than or equal to those for the matching foo_index.


Also there will be fewer index entries per block for the multi_index, 
which is why the I/O count will be higher, even in the best case, where 
each index entry references an equal number of rows.






Re: Multicolumn index for single-column queries?

2019-04-18 Thread Gavin Flower

On 19/04/2019 02:12, Ron wrote:

On 4/18/19 8:45 AM, Gavin Flower wrote:

On 19/04/2019 01:24, Ron wrote:

On 4/18/19 2:14 AM, Andreas Kretschmer wrote:

[snip]


(Prefix compression would obviate the need for this question. Then 
your multi-column index would be *much* smaller.)


True, but a multi column index will still be bigger than a single 
column index.


TANSTAAFL.



QUOTE: "Oh, 'tanstaafl.' Means 'There ain't no such thing as a free lunch.'"
From The Moon is a Harsh Mistress, by Robert Heinlein, published 1966, 
which is where I first came across TANSTAAFL.

However, it appears to have been used at least as early as 1949.

Just adding this, as probably there are many people who don't know the 
acronym.






Re: Multicolumn index for single-column queries?

2019-04-18 Thread Gavin Flower

On 19/04/2019 01:24, Ron wrote:

On 4/18/19 2:14 AM, Andreas Kretschmer wrote:



Am 18.04.19 um 08:52 schrieb rihad:

Hi. Say there are 2 indexes:

    "foo_index" btree (foo_id)

    "multi_index" btree (foo_id, approved, expires_at)


foo_id is an integer. Some queries involve all three columns in 
their WHERE clauses, some involve only foo_id.
Would it be ok from general performance standpoint to remove 
foo_index and rely only on multi_index? I know that
PG would have to do less work updating just one index compared to 
updating them both, but wouldn't searches
on foo_id alone become slower? 


it depends .

it depends on the queries you are using, on your workload. a 
multi-column-index will be larger than an index over just one column,

therefore you will have more disk-io when you read from such an index.


But two indexes are larger than one index, and updating two indexes 
requires more disk IO than updating one index.


Agreed.

A key question would be: how often is the query run, compared to the 
frequency of insertions, updates, and deletions -- wrt the table.




(Prefix compression would obviate the need for this question. Then 
your multi-column index would be *much* smaller.)


True, but a multi column index will still be bigger than a single column 
index.


[...]




Re: Multicolumn index for single-column queries?

2019-04-18 Thread Gavin Flower

On 18/04/2019 18:52, rihad wrote:

Hi. Say there are 2 indexes:

    "foo_index" btree (foo_id)

    "multi_index" btree (foo_id, approved, expires_at)


foo_id is an integer. Some queries involve all three columns in their 
WHERE clauses, some involve only foo_id.
Would it be ok from general performance standpoint to remove foo_index 
and rely only on multi_index? I know that
PG would have to do less work updating just one index compared to 
updating them both, but wouldn't searches

on foo_id alone become slower?

Thanks.



The multi column index will require more RAM to hold it.  So if there is 
memory contention, then there would be an increased risk of swapping, 
leading to slower query times.


I suspect that if there is more than enough RAM, then a multi column 
index will be slightly slower than a single column index. However, the 
difference will probably be lost in the noise -- in other words, the 
various things happening in the background will most likely have a far 
more significant impact on query duration.  IMHO



Cheers,
Gavin






Re: AW: Postgres Enhancement Request

2019-03-20 Thread Gavin Flower

Hi Markus,

Please see my comment at the bottom of this email!

On 21/03/2019 05:36, Zwettler Markus (OIZ) wrote:

Yes, that would be totally ok. Like the "with [grant|admin] option" privilege 
model in SQL. It should be done with all these predefined top-level database roles like 
CREATEROLE.

It doesn't only seem bogus but is also a security hole when users can get 
privileges they have never been granted.

Markus



[...]

A way of indicating content has been omitted!

In ancient times (the early 1990's) '[ omitted ]' was used, but I started 
the trend of using '[...]'.



Hmm.  Thinking about it a bit more carefully, it does seem bogus that a role that has 
CREATEROLE but not CREATEDB can make a role that has the latter privilege.  It would be 
more sensible to have a uniform rule that "you can't grant a privilege you don't 
have yourself", which would mean that the OP's problem could perhaps be solved by 
making a role that has CREATEROLE but not CREATEDB.

You could imagine going further and applying the full SQL privilege model to 
these things, which would make it possible to have a role that has CREATEDB (so 
can make DBs itself) but can't pass that privilege on to others for lack of 
grant options on CREATEDB.
But that would be a very much bigger chunk of work, and I'm not sure I see the 
payback.

regards, tom lane



In the postgres groups, please bottom post, as that is the convention here.

Bottom posting makes it easier to follow what is happening.

You can also intersperse comments and omit chunks that are no longer 
relevant.



Thanks,
Gavin




Re: POSTGRES/MYSQL

2019-03-11 Thread Gavin Flower

On 12/03/2019 09:40, Thiemo Kellner wrote:


Quoting Adrian Klaver :


On 3/11/19 9:31 AM, Sonam Sharma wrote:

Hi Adrian,
Ours is a retail company and the DB size is max 30 GB; currently we are 
using DB2.


Things to consider:

1) Migration tools for DB2 --> MySQL/Postgresql. I have not done 
this, so someone else will have to comment.


2) The clients/frameworks/ORMs you use now to connect to the 
database. Do they also work with Postgresql/MySQL?


It is also worth considering whether the architecture/model of DB2 fits 
PostgreSQL/MySQL. And while at it, how about getting rid of all the 
itching quirks of the current solution anyway? I see the danger of 
getting disappointed by either of PostgreSQL/MySQL if the current 
solution uses DB2 features that cannot be easily mapped to the 
contenders' features.


The bottom line of my insinuation is that the migration tool could be less 
of a point if you get the opportunity to overhaul your application.


Kind two dimes

Thiemo


Yes, I'm aware that different RDBMS's have their own quirks that 
people either exploit as benefits, or have to work around in a new DB.  
So naively converting one DB to another may incur huge performance hits, 
and possibly unexpected results (MySQL, I'm looking at you!) -- even if 
both DB's were technically equally good!


I think a conversion tool is a good starting point.  However, I strongly 
agree with Thiemo that you should carefully review your existing 
database's design/implementation -- so as to take maximum advantage of 
the facilities of Postgres, and avoid any pitfalls created by naively 
importing isms that are specific to your old db -- that might well be 
counter productive in PostgreSQL.



Cheers,
Gavin





Re: POSTGRES/MYSQL

2019-03-11 Thread Gavin Flower

On 12/03/2019 05:35, Michael Nolan wrote:
[...]
 MySQL is better at isolating users from each other and requires less 
expertise to administer.


[...]

I keep reading that MySQL is easier to administer, but I've never seen any 
evidence of that.  And in my very limited experience of both, I've found 
PostgreSQL easier to set up & administer.


From what I've read about problems with MySQL, I think that if you 
value your data, just don't use MySQL.   At least 4 times, since 2001, 
I've searched for PostgreSQL vs MySQL comparisons, and each time found 
PostgreSQL to be superior in terms of performance, reliability, 
robustness, and ease of use.


There is a definite trend of people moving from MySQL to PostgreSQL, and 
it's not just because of Oracle (MySQL diehards are moving to MariaDB).


I have a lot more confidence in PostgreSQL, than MySQL/MariaDB.


Cheers,
Gavin


P.S. Don't top post!  Commenting at the bottom is the norm for 
PostgreSQL mailing lists.





Re: simple division

2018-12-05 Thread Gavin Flower

On 06/12/2018 02:32, Adrian Klaver wrote:

On 12/5/18 4:45 AM, Gavin Flower wrote:

On 06/12/2018 00:05, Geoff Winkless wrote:
On Wed, 5 Dec 2018 at 09:13, Gavin Flower 
 wrote:

SELECT ceil(10/4.0);



Geoff

If you divide one integer by another, then it is logical to get an 
integer as the answer.


Hmm, grab any of my calculators and divide 10/4 and get 2.5. Seems not 
everybody agrees with that logic:)



Calculators normally work in floating point (in fact, as far as I am 
aware, they never work in integer mode by default), unless you are 
either doing symbolic maths or numbers that are integer based, like 
hexadecimal.


So your example does not contradict what I said.





Re: simple division

2018-12-05 Thread Gavin Flower

On 06/12/2018 00:05, Geoff Winkless wrote:

On Wed, 5 Dec 2018 at 09:13, Gavin Flower  wrote:

SELECT ceil(10/4.0);

Is what you want for that example.

Except that implies that "number of people who can fit in a car" is a
real number, not a whole.

IMO it's fundamentally broken that SQL doesn't cast the result of a
divide into a numeric value - the potential for unexpected errors
creeping into calculations is huge; however that's the standard and
no-one's going to change it now.

Having said that it's worth noting that those in the Other Place think
that it's broken enough to go against the standard (they have a DIV b
for integer divide and a/b for float).

Geoff

If you divide one integer by another, then it is logical to get an 
integer as the answer.
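
A quick illustration in psql:

    SELECT 10 / 4;          -- 2    (integer division)
    SELECT 10 / 4.0;        -- 2.5  (numeric division)
    SELECT 10 % 4;          -- 2    (psql does have %; mod(10,4) works too)
    SELECT ceil(10 / 4.0);  -- 3    (enough cars for everyone)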






Re: simple division

2018-12-05 Thread Gavin Flower

On 05/12/2018 20:07, Rob Sargent wrote:



On Dec 4, 2018, at 9:33 PM, Gavin Flower  wrote:


On 05/12/2018 10:51, Rob Sargent wrote:


On 12/4/18 2:36 PM, Martin Mueller wrote:
It worked, and I must have done something wrong. I'm probably not the only 
person who would find something like the following helpful:


division (integer division truncates the result)    10/3    3

The math types might take offense here, with the use of "truncates".  Integer division 
really asks how many times one can subtract the denominator from the numerator without going 
negative (or how many times the denominator "goes into" the numerator).

It may seem a nuisance, but int division is a useful construct and must be 
supported (and be the default). (If you have 10 people to transport in cars 
which hold four (all can drive) 10/4 = 3 ;) )





Hmm...

10 / 4 = 2


And two are left stranded!
  The point is that integer math has its place. You can't have 2.5 cars. So 10/4 
in this context is 3.
More correctly the calculation is
10/4 + (10%4 > 0 ? 1 : 0) = 3

(Maybe psql does have % so mod(10,4))



SELECT ceil(10/4.0);

Is what you want for that example.





Re: simple division

2018-12-04 Thread Gavin Flower

On 05/12/2018 10:51, Rob Sargent wrote:


On 12/4/18 2:36 PM, Martin Mueller wrote:
It worked, and I must have done something wrong. I'm probably not the 
only person who would find something like the following helpful:



division (integer division truncates the result)    10/3    3


The math types might take offense here, with the use of "truncates".  
Integer division really asks how many times one can subtract the 
denominator from the numerator without going negative (or how many 
times the denominator "goes into" the numerator).


It may seem a nuisance, but int division is a useful construct and 
must be supported (and be the default). (If you have 10 people to 
transport in cars which hold four (all can drive) 10/4 = 3 ;) )






Hmm...

10 / 4 = 2





Re: surprising query optimisation

2018-11-30 Thread Gavin Flower

On 01/12/2018 04:33, Stephen Frost wrote:

Greetings,

* Chris Withers (ch...@withers.org) wrote:

On 28/11/2018 22:49, Stephen Frost wrote:

* Chris Withers (ch...@withers.org) wrote:

We have an app that deals with a lot of queries, and we've been slowly
seeing performance issues emerge. We take a lot of free form queries from
users and stumbled upon a very surprising optimisation.

So, we have a 'state' column which is a 3 character string column with an
index on it. Despite being a string, this column is only used to store one
of three values: 'NEW', 'ACK', or 'RSV'.

Sounds like a horrible field to have an index on.

That's counter-intuitive for me. What leads you to say this and what would
you do/recommend instead?

For this, specifically, it's because you end up with exactly what you
have: a large index with tons of duplicate values.  Indexes are
particularly good when you have high-cardinality fields.  Now, if you
have a skewed index, where there's one popular value and a few much less
popular ones, then that's where you really want a partial index (as I
suggest earlier) so that queries against the non-popular value(s) is
able to use the index and the index is much smaller.

Of course, for this to work you need to set up the partial index
correctly and make sure that your queries end up using it.

Thanks!

Stephen


An index simply tells pg which block to look at (assuming that the index 
itself is not sufficient to satisfy the query), so if using an index 
would still require that pg look at most blocks, then it is cheaper to 
just scan the whole table - rather than load the index and still look at 
all the blocks that contain the table data.  I've oversimplified slightly.


An index is best used when using it results in fewer blocks being read 
from disk.


Also the use of RAM is better when the size of the index is kept small.  
For example, having an index on sex for nurses is a waste of time because 
most nurses are female.  However, a partial index (as suggested by 
Stephen, for your query) that includes only males could be useful if you 
have queries looking for male nurses (this assumes male nurses are a very 
small fraction of nurses, such that most data blocks don't have rows for 
male nurses, and the planner knows this).


I once optimised a very complex set of queries that made extensive use 
of indexes.  However, with the knowledge I have today, I would most 
likely have had fewer and smaller indexes.  As I now realize, some of my 
indexes were probably counterproductive, especially as I'd given no 
thought to how much RAM would be required to host the data plus 
indexes!  Fortunately, while the objective was to run all those queries 
within 24 hours, they actually only took a couple of hours.


Chris, I would strongly suggest you read up on the excellent 
documentation pg has about indexes, but don't expect to understand it 
all at one sitting...



Cheers,
Gavin




Re: surprising query optimisation

2018-11-28 Thread Gavin Flower

On 29/11/2018 11:26, Chris Withers wrote:

Hi All,

We have an app that deals with a lot of queries, and we've been slowly 
seeing performance issues emerge. We take a lot of free form queries 
from users and stumbled upon a very surprising optimisation.


So, we have a 'state' column which is a 3 character string column with 
an index on it. Despite being a string, this column is only used to 
store one of three values: 'NEW', 'ACK', or 'RSV'.


One of our most common queries clauses is "state!='RSV'" and we've 
found that by substituting this clause with "state='ACK' or 
state='NEW'" wherever it was used, we've dropped the postgres server's 
load average from 20 down to 4 and the CPU usage from 60% in user 
space down to <5%.


This seems counter-intuitive to me, so thought I'd ask here. Why would 
this be likely to make such a difference? We're currently on 9.4, is 
this something that's likely to be different (better? worse?) if we 
got all the way up to 10 or 11?


cheers,

Chris



At a guess...

    "state!='RSV'"  ==> pg only has to check one value

and

    "state='ACK' or state='NEW'"   ==> pg has to check two values

so I would expect the '!=' to be faster.
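
If you want to check which form actually wins on your data, comparing 
the plans is straightforward (a sketch -- "alerts" here is a made-up 
name standing in for your real table):

    EXPLAIN ANALYZE SELECT * FROM alerts WHERE state != 'RSV';
    EXPLAIN ANALYZE SELECT * FROM alerts WHERE state = 'ACK' OR state = 'NEW';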


Cheers,
Gavin




Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Gavin Flower

On 11/07/18 11:04, Hustler DBA wrote:

Thanks Adrian and Rich,
I will propose sqitch to the client, but I think they want something 
with a GUI frontend.


They want to deploy database changes, track which environments the 
change was deployed to, be able to rollback a change (with a rollback 
script), track when and if the change was rolled back and in which 
environment/database... so pretty much a deployment and tracking GUI 
software with a frontend.


In the past, for doing database deployments to Oracle, I created a 
tool using PHP (frontend/backend), MySQL (repository to track 
deployments and store deployment logs) and scripted the release 
scripts for deployment and rollback, and had my tool manage the 
scripts up the environments. The client is "looking" for something 
more open source for PostgreSQL. Do we have anything similar to this?


Neil

On Tue, Jul 10, 2018 at 6:22 PM, Rich Shepard wrote:


On Tue, 10 Jul 2018, Hustler DBA wrote:

A client of mine is looking for an open source tool to deploy and 
promote PostgreSQL DDL changes through database environments as part of 
SDLC. What tools (open source) do the community members use? I normally 
use scripts, but they want something open source.


Neil,

  I'm far from a professional DBA, but scripts are certainly open source 
because they're text files.

Being text files has nothing to do with being Open Source!  I could 
send you a BASH script, or the Java source code of a program, that is 
under a Proprietary licence.


On the other hand, a script could be open source; it all depends on 
the licence!




  To track changes for almost everything I highly recommend Git for 
version control. It's distributed and can handle most types of files. I 
use it for tracking coding projects as well as report and other text 
documents that are edited and revised prior to release.

Rich



Hi Neil,

Please bottom post, as that is the convention in these lists. This 
convention allows people to read the history before reading the reply.  
Alternatively, you can intersperse your comments if that makes the 
context easier to follow.  You can also omit large chunks that are no 
longer relevant, replacing them with "[...]".


Note that using scripts makes it easier to automate and to document, 
plus it gives you far more control.  With PostgreSQL I use psql, as it 
is easier to use than any GUI tool.  I use an editor to create SQL 
scripts and execute them from psql.  Note that you can use psql to 
execute SQL from within a BASH script.
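
A minimal sketch of the kind of script I mean (all names are 
hypothetical); you would run it with "psql -f deploy_001.sql" and keep 
the file in git:

    -- deploy_001.sql
    BEGIN;

    -- repository table to track what was deployed, and when:
    CREATE TABLE IF NOT EXISTS deployment_log (
        id          serial PRIMARY KEY,
        change_name text NOT NULL,
        deployed_at timestamptz NOT NULL DEFAULT now()
    );

    -- the actual DDL change being deployed:
    CREATE TABLE customer_note (
        id          serial PRIMARY KEY,
        customer_id int  NOT NULL,
        note        text NOT NULL
    );

    INSERT INTO deployment_log (change_name) VALUES ('deploy_001');

    COMMIT;

Since PostgreSQL has transactional DDL, the whole change either applies 
or rolls back as a unit.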


Scripts, once working and tested, can be reused and stored in git.  This 
is not something you can do with actions in a GUI!



Cheers,
Gavin




Re: Database name with semicolon

2018-06-29 Thread Gavin Flower

On 29/06/18 04:22, joby.john@nccgroup.trust wrote:


Pavel Stehule wrote 2018-06-28 12:10 GMT+02:00:
please, can you try '"db; name"' .. double quotes nested in apostrophes

I tried this with no luck. Also tried the other way, apostrophe nested 
in double quotes, with the same results. Looks like an issue in the ODBC 
driver, as it's not handling a semicolon within the database name. 
Probably needs to go back to the client asking to rename the database.


Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Reminds me of:
    https://xkcd.com/327/
'Little Bobby Tables'




Re: Code of Conduct plan

2018-06-07 Thread Gavin Flower

On 08/06/18 17:09, Christophe Pettus wrote:
[...]


It is equally unlikely that the Code of Conduct committee will need to decide 
what a car is, or whether or not someone has succeeded at university.

I'm not trying to be snide, but this does seem to be exactly what I was talking 
about: When asked for examples of cultural differences that might run afoul of 
the CoC, the examples don't seem to be either relevant (i.e., they are not 
things the CoC committee will have to address), or are clearly contextual in a 
way that a human will have no trouble understanding.
I was simply pointing out the problems with definitions.  The examples 
were chosen to show the problems exist even when the subject matter is 
not normally considered controversial.





I've called a friend of mine a bastard, but he took it as a mark of respect in 
the context of our discussion.

This is why we have human beings, rather than a regex, forming the Code of Conduct 
committee.  It's important to remember that the CoC committee is not going to be going 
around policing the community for potential violations; their job is to resolve actual 
situations between real people.  It's not their job to define values; it's their job to 
resolve situations.  In my experience in dealing with CoC issues, the situations (while 
often complex) are rarely of the form, "This word does not mean anything bad where I 
come from."


I've read emails from Sarah Sharp, and seen her harangue Linus (I was 
standing about a metre away from them).  Sarah was essentially trying to 
insist that Linus follow a CoC.  The pg lists are remarkably tame, 
compared to some I read.  Linus is quite entertaining at times, but most 
people appreciate where he is coming from, even when they are the target 
of one of his rants.  I've immense respect for Linus, but he'd likely 
fall foul of most CoC's!




--
-- Christophe Pettus
x...@thebuild.com


Cheers,
Gavin




Re: Code of Conduct plan

2018-06-07 Thread Gavin Flower

On 08/06/18 16:55, Ron wrote:

On 06/07/2018 04:55 AM, Gavin Flower wrote:
[snip]
The Americans often seem to act as though most people lived in the 
USA, therefore we should all be bound by what they think is correct!


"You" are wearing a tee-shirt (or hoodie), blue jeans and Nikes, while 
eating a fast food hamburger, drinking a Coke, listening to rock and 
roll, emailing us over the Internet from your Mac, thinking all Men 
are created equal, and feeling glad that NZ isn't an English colony.


That kind of cultural dominance makes one think the US truly is 
exceptional.


Only two of those things you said about me are currently true, and some 
are never true.


Perhaps accusing someone of being a Mac user should be banned by the CoC?




Re: Code of Conduct plan

2018-06-07 Thread Gavin Flower

On 08/06/18 14:21, Christophe Pettus wrote:

On Jun 7, 2018, at 02:55, Gavin Flower  wrote:
The Americans often seem to act as though most people lived in the USA, 
therefore we should all be bound by what they think is correct!

I have to say that this seems like a red herring to me.
Not entirely.  American web sites tend to insist on weird date formats, 
and insist on the archaic imperial units rather than the metric system 
that most people in the world use.  There were cultural clashes long 
before Trump got elected.  I'm English, and I'm very aware of the 
arrogance we showed when we had an Empire.  The Americans don't seem to 
have learnt from the mistakes the British made.


If you selected 3 teams of 4, one for each of the countries USA, France, 
and Japan -- isolated each team and asked them to draw up a 
Code-of-Conduct, they would clash.  Mind you, they'd probably clash if 
you selected 3 teams from different parts of the USA!




1. The CoC committee handles actual incidents involving real people.  It's not 
their job to boil the ocean and create a new world; they deal with the matters 
brought before them.  I have no reason to believe that they will not apply good 
sense and judgement to the handling of the specific cases.

2. I don't think that there is a country where someone being driven out of a 
technical community by harassment is an acceptable local value.
True, but defining acceptable values is way more difficult than it 
looks, as are definitions in general.


For example, try defining something simple, like what a car is! EVERYBODY 
knows what a car is, right?  It is not something controversial that 
affects people's religious beliefs (car nuts excepted!).  You will find 
it incredibly difficult to have a definition that includes everything 
that you consider a car, and excludes everything that you don't think is 
a car.  A colleague once had a car that only had 3 road wheels -- ever 
come across that before???


Try defining success at university; it is downright impossible if you 
consider it with sufficient care -- yet people often act as though there 
is a clear-cut definition; they think it is so obvious they usually don't 
bother attempting to define it.  If a girl enrols in 3 courses at a 
university and completes them, but lives for 70 years without further 
study -- has she failed because she never got a degree?




3. The only actual real-life example of a culture clash that I've seen offered up here is 
the ability to say "c*nt" on a technical mailing list about databases.  That 
seems a very strange and specific hill to choose to die on in this discussion.
I agree that such words have no place in a discussion of databases, 
except when they do!


There was once a company that wrote an adventure game that refused to 
accept rude words, so people went out of their way to look for ones it 
didn't know about.  So their action had consequences opposite to their 
intentions.


Saying people should never denigrate others seems straightforward and 
noble until you look at things in detail.  I've called a friend of mine 
a bastard, but he took it as a mark of respect in the context of our 
discussion.




--
-- Christophe Pettus
x...@thebuild.com


I think a written code of conduct is laudable, but  impracticable in 
reality, even if "Politically Correct".



Cheers,
Gavin



Re: Code of Conduct plan

2018-06-07 Thread Gavin Flower

On 07/06/18 21:49, Raymond O'Donnell wrote:

On 07/06/18 09:04, Pablo Hendrickx wrote:
You don't have to be a magician to predict this is going to harm the 
community.


Please keep your American social politics out of Postgres, thank you!


As a long-time lurker and occasional participant on this list, I don't 
think this has ever been an issue, in my experience anyway. There 
might be an occasional turn of phrase which I have to parse a bit, but 
that's about it. :-)


Ray.

The Americans often seem to act as though most people lived in the USA, 
therefore we should all be bound by what they think is correct!



Cheers,
Gavin




Re: Code of Conduct plan

2018-06-04 Thread Gavin Flower

On 05/06/18 06:41, Jason Petersen wrote:
On Jun 4, 2018, at 10:59 AM, Joshua D. Drake wrote:


"snowflake", I find that term hilarious others find it highly 
offensive. Which is correct?


I don’t think it’s offensive but it plainly fails your “if you 
wouldn’t say it to a client, don’t say it here” test.


Generally we so-called “snowflakes” aren’t the ones raising hell about 
CoCs, which is the third rail I’ve seen most likely to set off the 
actually hypersensitive types who fling this so-called insult around.


To be honest, examples like “sacrifice a willing virgin” or “offering 
my first born […]”, etc. do not contribute to conversations but rather 
bury rhetorical and technical weaknesses under a top layer of 
historical/emotional semiotic thatch that must be cut through to 
appropriately determine the validity of an argument. I do not 
understand what one might hope to preserve by ensuring users of such 
phrases are permitted to continue putting up such smokescreens.

Dem der big words you be using!  You are over-analysing.

Nothing's buried!

Note that in discussing the CoC, I've not used colourful language as 
part of, nor instead of, any argument I've presented -- other than as 
examples.


Any real difficulties would be mentioned explicitly, if not already 
known by the listener.


If a rational argument is needed, it can/will be provided. Colourful 
language is no substitute for valid arguments, that we are agreed. Nor 
should it be used as a smokescreen.


Colourful language makes conversation less stilted when used 
'appropriately', and can help bonding.  A lot depends on context.


One place where I worked, I pretended to blame people for things outside 
their control.  There were 4 people with whom I didn't indulge in such 
humour: the manager (it may well have been his fault, and he would likely 
take it badly in any case), the technical manager (he was too stressed), 
and 2 people who obviously did not appreciate that kind of humour.  
Others had no problem with it.


With some friends/colleagues I've used grossly offensive language. 
However, in the context it's been taken in the spirit intended and not 
at face value.  Though, I'm careful not to overdo it, and not every time 
we spoke.  There are things I might say face-to-face that I would not 
write in an email -- as I've no idea how the reader might be feeling 
when they read it; context and body language are important to consider.




Ultimately, the important thing this CoC provides is some concrete 
language to point at when a party is aggrieved and explicit avenues of 
redress available when one refuses to address one’s own behavior. 
We’re adults here, the strawmen of people being harangued out of the 
community because they said a bad word are unlikely to materialize.


+1


If we are all adults, then we don't need a CoC.

I fear that the CoC is likely to be misused.

I have seen many heated arguments in these lists, but none that got out 
of hand.


I strongly feel that a CoC is neither needed nor useful here. It is a 
Politically Correct check list item.


-100


Cheers,
Gavin



Re: Code of Conduct plan

2018-06-03 Thread Gavin Flower

On 04/06/18 07:32, Adrian Klaver wrote:

On 06/03/2018 11:29 AM, Tom Lane wrote:

Two years ago, there was considerable discussion about creating a
Code of Conduct for the Postgres community, as a result of which
the core team announced a plan to create an exploration committee
to draft a CoC [1].  That process has taken far longer than expected,
but the committee has not been idle.  They worked through many comments
and many drafts to produce a version that seems acceptable in the view
of the core team.  This final(?) draft can be found at

https://wiki.postgresql.org/wiki/Code_of_Conduct

We are now asking for a final round of community comments.
Please send any public comments to the pgsql-general list (only).
If you wish to make a private comment, you may send it to
c...@postgresql.org.

The initial membership of the CoC committee will be announced 
separately, but shortly.

Unless there are substantial objections, or nontrivial changes as a 
result of this round of comments, we anticipate making the CoC official 
as of July 1 2018.


My comments:

1) Reiterate my contention that this is a solution in search of a 
problem. Still it looks like it is going forward, so see below.


2) "... engaging in behavior that may bring the PostgreSQL project 
into disrepute, ..."
This to me is overly broad and pulls in actions that may happen 
outside the community. Those if they are actually an issue should be 
handled where they occur not here.


3) "... members must be sensitive to conduct that may be considered 
offensive by fellow members and must refrain from engaging in such 
conduct. "
Again overly broad, especially given the hypersensitivity of people 
these days. I have found that it is enough to disagree with someone to 
have it called offensive. This section should be removed as proscribed 
behavior is called out in detail in the paragraphs above it.
I might possibly say that "I'm the master in this area" when talking to 
someone on a technical subject.  In the sense that I'm better at that 
particular skill, but some hypersensitive American could get their 
knickers in a twist (notice that in this context no gender is implied 
-- also, using the expression "get their knickers in a twist" could 
offend some snowflake) claiming that I'm suggesting that whoever I'm 
talking to is my slave!  I heard of an American university that doesn't 
want people to use the term master, as in an MSc, because of the history 
of slavery.


I've used the expressions "sacrifice a willing virgin" and "offering my 
first born to the gods" as ways to ensure success of resolving a 
technical issue.  The people I say that to, know what I mean -- and they 
implicitly know that I'm not seriously suggesting such conduct.  Yet, if 
I wrote that publicly, it is conceivable that someone might object!


There are a lot of words and phrases that are okay in some cultures, but 
may be offensive in others -- even within the same country.


Consider a past advertising campaign in Australia to sell government 
Bonds.  They used two very common hand gestures that are very 
Australian.  Bond sales dropped.  On investigation, they found the bonds 
were mainly bought by old Greek people, who found the gestures obscene.  
The gestures?  Thumbs up, and the okay gesture formed by touching the 
thumb with the next finger -- nothing sexually suggestive to most 
Australians, but traditional Greeks found them offensive.


You should look at the hoohaa over what Linus Torvalds says.  I've read 
several of his posts and seen videos where he has been less than polite.  
But I know where he is coming from.  If Linus was rude to me, I would be 
chuffed, because then I'd know I was good enough for him to reply to me, 
but that either I could have done better or that Linus was wrong.  For 
example, see the email exchange with the infamous Sarah Sharp 
https://lkml.org/lkml/2013/7/15/407.  At the 2015 Australian Linux 
Conference, I watched as Sarah harangued Linus for over twenty minutes; 
Linus kept calm and polite throughout.


So common words and phrases could be offensive to some people. Sometimes 
people just need to let off steam.


You could end up with people being excessively polite to show their 
displeasure.  Ever come across the expression "icily polite"?  It was a 
way of showing contempt while denying the victim any excuse for a deadly 
duel!  Which would lead to the issue that people wouldn't always know if 
the politeness was real, or if it was intended to show disdain.


Be very careful in attempting to codify 'correct' behaviour!


Cheers,
Gavin





    regards, tom lane

[1] https://www.postgresql.org/message-id/56a8516b.8000...@agliodbs.com










Re: Index/trigger implementation for accessing latest records

2018-05-02 Thread Gavin Flower

Hi Alastair,

See embedded comments.

On 02/05/18 21:51, Alastair McKinley wrote:


Hi,


I have a table that stores a location identifier per person which will 
be appended to many times.


However, for many queries in this system we only need to know the most 
recent location per person, which is limited to about 1000 records.



Is the following trigger/index strategy a reasonable and safe approach 
to fast access to the latest location records per person?



 1. A boolean column (latest_record default true) to identify the
latest record per person
 2. A before insert trigger that updates all other records for that
person to latest_record = false
 3. A partial index on the latest_record column where latest_record is
true

Suggest the simplest and fastest approach is to use timestamptz, a 
timestamp with time zone (it copes with changes of daylight saving and 
different timezones).  Then you have no need of triggers.


Then all you need to do is search for the person-id with the maximum 
value of the timestamptz!
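
A minimal sketch, with made-up table and column names, of the 
timestamptz approach:

    CREATE TABLE person_location (
        person_id   int         NOT NULL,
        location_id int         NOT NULL,
        recorded_at timestamptz NOT NULL DEFAULT now()
    );

    CREATE INDEX person_location_idx
        ON person_location (person_id, recorded_at DESC);

    -- latest location per person, using PostgreSQL's DISTINCT ON:
    SELECT DISTINCT ON (person_id)
           person_id, location_id, recorded_at
    FROM   person_location
    ORDER  BY person_id, recorded_at DESC;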


[...]


Cheers,
Gavin



Re: Enforce primary key on every table during dev?

2018-03-01 Thread Gavin Flower

On 02/03/18 06:47, Daevor The Devoted wrote:


On Thu, Mar 1, 2018 at 2:07 PM, Rakesh Kumar wrote:



>Adding a surrogate key to such a table just adds overhead, although 
that could be useful
>in case specific rows need updating or deleting without also modifying 
the other rows with that same data - normally, only insertions and 
selections happen on such tables though,
>and updates or deletes are absolutely forbidden - corrections happen 
by inserting rows with an opposite transaction.

I routinely add surrogate keys like a serial col to a table already 
having nice candidate keys, to make it easy to join tables.  SQL starts 
looking ungainly when you have a 3 col primary key and need to join it 
with child tables.


I was always of the opinion that a mandatory surrogate key (as you 
describe) is good practice.
Sure there may be a unique key according to business logic (which may 
be consist of those "ungainly" multiple columns), but guess what, 
business logic changes, and then you're screwed! So using a primary 
key whose sole purpose is to be a primary key makes perfect sense to me.


I once worked on a database that had primary keys of at least 4 
columns, all character fields; a primary key could easily exceed 45 
characters.  The parent-child structure was at least 4 deep.


A child table only needs to know its parent, so there is no logical need 
to include its parent and higher tables' primary keys, and then have to 
add a field to make the composite primary key unique!  So if every table 
has int (or long) primary keys, then a child only needs a single field 
to reference its parent, as shown in the sketch below.
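
To illustrate with hypothetical tables -- each child carries just one 
surrogate-key column to reach its parent:

    CREATE TABLE customer (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE customer_order (
        id          serial PRIMARY KEY,
        customer_id int NOT NULL REFERENCES customer (id),
        ordered_at  timestamptz NOT NULL DEFAULT now()
    );

    -- the join is a single-column equality, however deep the hierarchy:
    SELECT o.id, o.ordered_at
    FROM   customer_order o
    JOIN   customer c ON c.id = o.customer_id;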


Some apparently safe Natural Keys might change unexpectedly.  A few 
years back there was a long thread on Natural versus Surrogate keys - 
plenty of examples where using Natural Keys can give grief when they 
have to be changed!  I think it best to isolate a database from external 
changes as much as is practicable.


Surrogate keys also simplify coding, be it in SQL or Java, or whatever 
language is flavour of the month.  They also make setting up test data 
and debugging easier.


I almost invariably define a Surrogate key when I design tables.


Cheers,
Gavin





Re: To all who wish to unsubscribe

2017-11-20 Thread Gavin Flower

On 21/11/17 09:20, Magnus Hagander wrote:
On Mon, Nov 20, 2017 at 9:07 PM, Joshua D. Drake wrote:


On 11/20/2017 12:03 PM, Tom Lane wrote:

Unfortunately, the removal of the footer is a feature not a bug.
In order to be DKIM-compatible and thus help avoid becoming classified 
as spammers, we can't mangle message content anymore, just like we 
can't mangle the Subject: line.


Ugh, o.k.

In principle, the List-Unsubscribe: headers that are now included in 
mailing list headers allow MUAs to offer convenient unsubscribe 
buttons.  Not sure how many of the people who are complaining use 
mail agents that don't handle that.


I use Thunderbird which I imagine most people on the lists are
using. I can't find where these would work to unsubscribe.


Not even remotely. People use gmail. See 
https://blog.hagander.net/mail-agents-in-the-postgresql-community-233/.


And gmail does automatically show an unsubscribe link on these mails. 
See attached screenshot for the mail from Jonathan earlier today as an 
example.


--
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Not everyone uses gmail!

I would never recommend it for either personal or business use - as 
Google scans it for commercial purposes.



Cheers,
Gavin