Re: Query to get name and data type of a view

2020-05-22 Thread David G. Johnston
On Friday, May 22, 2020, stan  wrote:

> When I run the following query,
>
>
> SELECT column_name,data_type
> FROM information_schema.columns
> WHERE table_name = 'mfg_part_view';
>
>
> I get the following result:
>
>  column_name  | data_type
> --------------+--------------------
>  mfg          | USER-DEFINED
>  mfg_part_no  | character varying
>  unit         | USER-DEFINED
>
>
> I need to return the name, and data type of each column for the specified
> view.
>

Use the pg_catalog schema “tables” directly instead of the SQL standard
information_schema views.  The latter doesn’t provide detail of
PostgreSQL-specific features, by definition.
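
For example, a minimal sketch against pg_catalog (untested here, using the
view name from your query):

SELECT a.attname AS column_name,
       pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_class c ON c.oid = a.attrelid
WHERE c.relname = 'mfg_part_view'
  AND a.attnum > 0          -- skip system columns
  AND NOT a.attisdropped;

format_type() reports the real type name (e.g., the enum's name) where
information_schema can only say USER-DEFINED.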

David J.


Re: Should I use JSON?

2020-05-21 Thread David G. Johnston
On Thu, May 21, 2020 at 8:37 AM stan  wrote:

> So here is the question, should I just manually parse this data, as I have
> been doing to insert into appropriate entities into the database? Or
> should I
> insert the JSON data, and use some queries in the database to populate my
> tables from the JSON tables?
>

The simplest thing that works is usually a good starting point.

I consider it a requirement that I can get to the original data as supplied
by the vendor.  So, if you are capturing that elsewhere, going directly to
structured tables is sufficient.  If you don't have any other place where
that gets saved I would save it to a table first then convert it to the
final structured tables.

My current setup is using "jq" to perform an intermediate transform of the
source json to table-specific files and then using psql and jsonb_to_record
to insert the json data into the tables.
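
A rough sketch of that last step (table and field names here are invented
stand-ins, not my actual setup):

-- staging table holding one jsonb document per row
CREATE TABLE part_staging (doc jsonb);

-- explode each document into typed columns and insert
INSERT INTO parts (mfg, part_no)
SELECT r.mfg, r.part_no
FROM part_staging s,
     jsonb_to_record(s.doc) AS r(mfg text, part_no text);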

David J.


Re: Doubt in pgbouncer

2020-10-02 Thread David G. Johnston
On Thursday, October 1, 2020, Fabio Pardi  wrote:

> Hi Rama,
>
> On 02/10/2020 01:42, Rama Krishnan wrote:
>
> Hi Friends,
>
> By using pgbouncer can we split read and write queries?
>
>
> pgbouncer is just a connection pooler.
>
> The logic where to send the reads and where the writes, should be in our
> application.
>

Seems as if pgPool is at least worth considering...

David J.


Re: UUID generation problem

2020-10-05 Thread David G. Johnston
On Monday, October 5, 2020, James B. Byrne  wrote:

>
> idempiere(5432)=# alter role "idempiere_dbadmin" set search_path =
> 'adempiere,
> public';
> ALTER ROLE
> idempiere(5432)=#  select current_schemas(true);
>  current_schemas
> -----------------
>  {pg_catalog}
> (1 row)
>
> This does not look like the ALTER statement had any effect.  Am I missing
> a step?
>

Whenever the role subsequently starts a new session, the specified value
becomes the session default, overriding whatever setting is present in
postgresql.conf or has been received from the postgres command line. This
only happens at login time; executing SET ROLE or SET SESSION AUTHORIZATION
does not cause new configuration values to be set.

https://www.postgresql.org/docs/13/sql-alterrole.html
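
For instance (names from your session; note the schema list is written as
separate identifiers, not one quoted string), the new value only appears
after reconnecting:

ALTER ROLE idempiere_dbadmin SET search_path = adempiere, public;
\c idempiere idempiere_dbadmin
SHOW search_path;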

David J.


Re: UUID generation problem

2020-10-05 Thread David G. Johnston
On Monday, October 5, 2020, James B. Byrne  wrote:

>
>
> I am so confused by this.  I tried to do this:
>
> [root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere
> --username=idempiere_dbadmin --host=localhost
> Password for user idempiere_dbadmin:
> psql (11.8)
> Type "help" for help.
>
> idempiere(5432)=# alter role "idempiere-dbadmin" set search_path =
> 'adempiere,
> pg_catalog, public';
> ERROR:  role "idempiere-dbadmin" does not exist
> idempiere(5432)=#
>
> If the role idempiere-dbadmin does not exist then how did I connect to the
> idempiere database?
>

Underscore versus hyphen.

David J.


Re: Writing WAL files

2020-10-05 Thread David G. Johnston
On Monday, October 5, 2020, Robert Inder  wrote:

> But the change Adrian Klaverd highlighted suggests that this is
> deliberately no longer the case,
> and I am left wondering what it does, in fact do/mean now.
>

“If no WAL has been written since the previous checkpoint, new checkpoints
will be skipped even if checkpoint_timeout has passed.”

https://www.postgresql.org/docs/13/wal-configuration.html

David J.


Re: Writing WAL files

2020-10-04 Thread David G. Johnston
On Sunday, October 4, 2020, Robert Inder  wrote:

> than shipping an empty file every few minutes?
>

The file is not empty.  We’re talking 16 megabytes in a default setup...
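
On current versions you can check the size for a given cluster with:

SHOW wal_segment_size;  -- 16MB unless changed at initdb time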

David J.


Re: database shutting down

2020-10-19 Thread David G. Johnston
On Monday, October 19, 2020, Atul Kumar  wrote:

> Hi,
>
> I am configuring repmgr, so in postgresql.conf when i changed the
> parameter share_preload_libraries='repmgr', my database server is not
> starting.
>
> When i comment that parameter, database server started.
>
>
> Please help why that parameter is stopping database to start.
>
>
You might get better results if you use the support channels for the
project as noted in its documentation.

David J.


Re: Gurjeet Singh Index Adviser User Interface

2020-10-09 Thread David G. Johnston
On Fri, Oct 9, 2020 at 5:20 PM Yessica Brinkmann <
brinkmann.yess...@gmail.com> wrote:

> I am using Postgresql 8.3.23, I really use this version because the Index
> Adviser only works with this version of Postgresql.
>

I suggest first figuring out whether you are able to successfully install
the current PostgreSQL Server (git master branch) on a current Linux
release (if you really want to go Windows feel free but there is less help
to be had there.)  Until you can get that to work you should not proceed
any further on attempting to modify PostgreSQL server.

Then, instead of trying to get ancient PostgreSQL server code running on
modern hardware, you should focus your attention on getting ancient
third-party modifications to the PostgreSQL server code to work on the
modern PostgreSQL server.

As an aside, I noticed the "USE_PGXS = 1"; it didn't exist back in the 8.3
days.

As far as I can see, in a limited read of the readme summary, all of the
relevant code needs to exist within the PostgreSQL server source tree, the
pg_advise_index being placed in the contrib section just like, for example,
hstore or pg_prewarm (which I think has a command line interface).  Then
you modify, build and install the server and the modifications are fully
incorporated as core+contrib code.  This seems all quite straightforward,
with plenty of examples to copy from, for dealing with the overall
structural aspects of the codebase and build/install processes.  Which
brings me back to first understanding how the unaltered system works before
trying to make alterations.  You may choose to learn that using an 8.3
server but you will probably find little help if you go that route.
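
For reference, the standard in-tree flow is roughly this (from a source
checkout; the "world" targets include the contrib modules):

./configure
make world
make install-world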

David J.


Re: Strange behavior

2020-10-10 Thread David G. Johnston
On Sat, Oct 10, 2020 at 9:13 AM Olivier Leprêtre 
wrote:

This has nothing to do with pgAdmin, or any other client interface.

In other words, a wrong query returns a valid result. This happens because
> v1 is a column from test1, (select vx from test2) will return an error as
> expected.
>

https://wiki.postgresql.org/wiki/FAQ#Why_doesn.27t_PostgreSQL_report_a_column_not_found_error_when_using_the_wrong_name_in_a_subquery.3F
I am pondering trying to get the FAQ entry incorporated into the actual
documentation.
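
For reference, a minimal illustration of what the FAQ describes (names taken
from your example):

CREATE TABLE test1 (v1 int);
CREATE TABLE test2 (v2 int);

-- no error: v1 inside the subquery resolves as an outer reference to test1
SELECT * FROM test1 WHERE v1 IN (SELECT v1 FROM test2);

-- error, as expected: vx exists nowhere
SELECT * FROM test1 WHERE v1 IN (SELECT vx FROM test2);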

David J.


Re: pgbouncer installation example (Step by step)

2020-10-14 Thread David G. Johnston
On Wed, Oct 14, 2020 at 10:08 AM Atul Kumar  wrote:

> Please share a clean example of installing, configuring and testing
> pgBouncer.
>
> Your official links are not organized so I need an example of
> PgBouncer with organized steps.
>

As I said on your exact same posting to the -admin list; not the right
place and not enough information provided.

David J.


Re: passing linux user to PG server as a variable ?

2020-08-17 Thread David G. Johnston
On Mon, Aug 17, 2020 at 5:46 PM David Gauthier 
wrote:

> Users will connect to the DB and then update a table using SQL at the
> prompt.  And I want a post update trigger to identify who (linux user on
> the client side) just made that change.  I was sort of hoping that this 8
> character string (the linux user id) could be passed from client -> server
> as a parameter that is set on the user/client side, perhaps using that "-v"
> option, which could somehow be passed along to the server.
>

If you are giving a user a direct connection to the DB so they can run SQL
they should have their own individual credentials.

Regardless, if you rely on runtime variables there is no way to prevent the
value of those variables from being changed by the user.
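
E.g., nothing stops a connected user from simply doing (parameter name
illustrative):

SET sysinfo.osuser = 'someone_else';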

David J.


Re: passing linux user to PG server as a variable ?

2020-08-17 Thread David G. Johnston
On Mon, Aug 17, 2020 at 12:53 PM David Gauthier 
wrote:

> Looking at psql command line options, I see "-v" (lowercase) which is
> described as...
>
> -v assignment
> --set=assignment
> --variable=assignment
>
> Perform a variable assignment, like the \set meta-command. Note that you
> must separate name and value, if any, by an equal sign on the command line.
> To unset a variable, leave off the equal sign. To set a variable with an
> empty value, use the equal sign but leave off the value. These assignments
> are done during a very early stage of start-up, so variables reserved for
> internal purposes might get overwritten later.
> So I tried that without success.  "-v sysinfo.osuser=foo" failed the
> connect with..."psql: could not set variable "sysinfo.osuser""
>
> Next I tried... "-v osuser=foo".  This didn't fail the connect, but once I
> got in, "show osuser" gave... "ERROR:  unrecognized configuration
> parameter "osuser""
>
The part of the description that says "like the \set meta-command" means
you need to read its description as well.  There, and through links
therein, you will learn that what you are creating is a variable within the
psql client itself, not on the server.  Usage of that client-side variable
is documented.  As long as you don't need the variable on the server, and
oftentimes you do not, then this feature will work just fine for you.

The SHOW SQL command (and other documented options[1]) is a server command
and inspects server variables.  If you really need to create one of those
in the current session it may be possible - though I believe you have to
use a namespace prefix (i.e., your sysinfo.osuser) to get the system to
recognize a user-created variable name.  There is some work on improving
things in this area.  Though worst case you can just stick the desired
value into a temporary table and maybe create some function wrappers to
modify/access it.
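
A minimal sketch of the namespaced server variable within a session (names
illustrative):

-- set for the remainder of the session (false = not transaction-local)
SELECT set_config('sysinfo.osuser', 'dgauthier', false);

-- read it back later, e.g. inside a trigger function
SELECT current_setting('sysinfo.osuser');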

David J.

[1] https://www.postgresql.org/docs/devel/config-setting.html


Re: passing linux user to PG server as a variable ?

2020-08-17 Thread David G. Johnston
On Mon, Aug 17, 2020 at 5:46 PM David Gauthier 
wrote:

> >> You pass in the $USER to your client software where it executes a
> post-connect hook SQL script populating a temp table with that value,
> usually via a function.
>
> A "post-connect hook SQL script"?
> My (limited) understanding of this is that once you connect, you're on the
> server and everything on the client side (like $USER) is no longer in play.
> But a "post-connect hook SQL script" sounds like something you would
> run unconditionally after the connect which would then create/populate the
> temp table.
>
> The problem is that I need to do this outside of an app which could run
> something like that.  Users will connect to the DB and then update a table
> using SQL at the prompt.  And I want a post update trigger to identify who
> (linux user on the client side) just made that change.  I was sort of
> hoping that this 8 character string (the linux user id) could be passed
> from client -> server as a parameter that is set on the user/client side,
> perhaps using that "-v" option, which could somehow be passed along to the
> server.  But from what you said earlier, that only exists on the client
> side.
>
> Is there any mechanism for a client connect request to pass a variable
> like this to the server which it then could read on the server side?
>

The following link details what you can provide via libpq - which is what
psql exposes:

https://www.postgresql.org/docs/13/libpq-connect.html#LIBPQ-PARAMKEYWORDS

The item of note here is "options" - which can be set directly on the
connection string or passed in from the environment via PGOPTIONS

Note that in theory Customized Options can be passed this way:

https://www.postgresql.org/docs/13/runtime-config-custom.html
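
In theory that looks something like the following (parameter name
illustrative; as noted below, I haven't verified this route myself):

PGOPTIONS="-c sysinfo.osuser=$USER" psql -h myhost mydb

or, equivalently, directly on the connection string:

psql "dbname=mydb options='-c sysinfo.osuser=dgauthier'"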

But as I've said this particular usage for customized options is something
I am unfamiliar with and is possibly not workable if your description of
the attempt is accurate.

Otherwise yes, before handing an open session back to the caller you will
want to run some SQL against that connection that sets up the environment
in the way you desire.  If this isn't something you can arrange then you
should probably just give up on the idea of having the server be aware of
unrelated O/S level identification provided out of band and just give each
user their own login and then inspect current_user or session_user.

David J.


Re: passing linux user to PG server as a variable ?

2020-08-17 Thread David G. Johnston
On Monday, August 17, 2020, David Gauthier  wrote:

> OK, trying to piece together something that might work but I don't see the
> pieces falling into place.
> From the link you provided...
>
> "The most fundamental way to set these parameters is to edit the file
> postgresql.conf"
> So I'm fine with asking our IT guys to stick some lines in there for us.
> But will the thing that executes the file understand what $USER is ?  Will
> this work...   "osuser = $USER"
>

This does not seem like something you’d setup at the server configuration
level...and no, writing osuser=$USER is going to be unintelligible to the
server.


>
> I tried this sort of thing through $PGOPTIONS...
> setenv PGOPTIONS "-c 'osuser=$USER'"
> But when I go to connect...
> psql: FATAL:  unrecognized configuration parameter "'osuser"
>

> I can avoid the error by just throwing a namespace in there...
>

Expected


> atletx7-reg036:/home/dgauthie[ 214 ] --> setenv PGOPTIONS "-c
> 'os.user=$USER' "
> But once in, "show os.user" is undefined.
>

Not sure, though maybe start with constants for values to isolate where the
info is being lost.  I’m not absolutely positive that PGOPTIONS will even
work here and even if it does that method has some limitations if you want
to use things like connection poolers.  It is, however, the existing ideal
way to accomplish the goal of having the connection pre-establish a server
GUC at startup without having to deal with SQL.


>
> I'm fine with a temp table approach, but don't really know where/how to
> create it in terms of pg sys files, init scripts or env vars like
> PGOPTIONS.
>

You would interact with it using pure SQL.  The how/where depends heavily
on your environment.  You lass in the $USER to you client software where it
executes a post-connect hook SQL script populating a temp table with that
value, usually via a function.
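
A sketch of what such a hook could look like with plain psql (file and
variable names invented):

-- contents of a hypothetical post_connect.sql, included with \i after
-- starting psql as:  psql -v osuser=$USER ...
CREATE TEMP TABLE session_info AS SELECT :'osuser'::text AS osuser;

-- triggers and queries can then read:
SELECT osuser FROM session_info;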

David J.


Re: import XML

2020-08-18 Thread David G. Johnston
On Tue, Aug 18, 2020 at 8:27 AM PASCAL CROZET <
pascal.cro...@qualis-consulting.com> wrote:

> I want to import XML file into PG database table.
> I've find functions to get the XML content of a cell after imported an XML
> file with the pg_get_file function.
> But, I want to explode the XML content to colums. How can I do this ?
>

In short, you need to import the content as a document value first then use
functions to explode that document.

You should find the documentation helpful.

https://www.postgresql.org/docs/10/functions-xml.html

Specifically, "xmltable"

Keep in mind that "content as a document value" can just be a literal.
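
A minimal sketch with a literal (element and column names invented):

SELECT x.id, x.name
FROM xmltable('/items/item'
              PASSING xml '<items><item id="1"><name>widget</name></item></items>'
              COLUMNS id   int  PATH '@id',
                      name text PATH 'name') AS x;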

David J.


Re: Inline count on a query

2020-08-19 Thread David G. Johnston
On Wednesday, August 19, 2020, Laura Smith <
n5d9xq3ti233xiyif...@protonmail.ch> wrote:

> Hi,
>
> Let's say we've got a fairly basic table :
>
> create table networks (
> lan_id text not null,
> net_id text not null,
> port_id text not null
> );
> create index net_uniq on networks(lan_id,port_id);
>
> The query conundrum I am facing is that I need to add metadata to the
> output of the query that indicates the count of ports a given net has on a
> lan.
>
> So, for example, given :
> insert into networks(lan_id,net_id,port_id) values('L1','N1','P1');
> insert into networks(lan_id,net_id,port_id) values('L1','N1','P2');
>
> The metadata count would be 1, 2 (because 'N1' has 'P1' and 'P2' on 'L1').
>
> Is there a sensible way to query this without stressing out Postgres too
> much ?  I'm guessing a CTE of some sort ?
>
>
Suggest you provide your desired output in table format, and show “the
query” that you mention.

David J.


Re: Inline count on a query

2020-08-19 Thread David G. Johnston
On Wed, Aug 19, 2020 at 8:19 AM Laura Smith <
n5d9xq3ti233xiyif...@protonmail.ch> wrote:

> On Wednesday, 19 August 2020 15:09, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
> > On Wednesday, August 19, 2020, Laura Smith <
> n5d9xq3ti233xiyif...@protonmail.ch> wrote:
> >
> > > Hi,
> > >
> > > Let's say we've got a fairly basic table :
> > >
> > > create table networks (
> > > lan_id text not null,
> > > net_id text not null,
> > > port_id text not null
> > > );
> > > create index net_uniq on networks(lan_id,port_id);
> > >
> > > The query conundrum I am facing is that I need to add metadata to the
> output of the query that indicates the count of ports a given net has on a
> lan.
> > >
> > > So, for example, given :
> > > insert into networks(lan_id,net_id,port_id) values('L1','N1','P1');
> > > insert into networks(lan_id,net_id,port_id) values('L1','N1','P2');
> > >
> > > The metadata count would be 1, 2 (because 'N1' has 'P1' and 'P2' on
> 'L1').
> > >
> > > Is there a sensible way to query this without stressing out Postgres
> too much ?  I'm guessing a CTE of some sort ?
> >
> > Suggest you provide your desired output in table format, and show “the
> query” that you mention.
> >
> > David J.
>
>
> If I knew what "the query" was, I wouldn't be posting here.  ;-p
>

You implied that there was some existing query to which you are trying to
add metadata.

> The only thing I've managed to come up with so far is adding count(*) over
> (partition by digest(lan_id||net_id,'sha256')) to my query, but that
> obviously gives the total count, not the ongoing incremental count.
>

If you want order to matter you need to add an ORDER BY to the window
specification, probably with a ROWS ... PRECEDING / ... FOLLOWING frame
modifier as well.  Though there is nothing in your original formulation that
suggests
you cared about an "ongoing incremental count" so we're back to my
insistence you better formulate your problem statement and/or actually
provide the output needed for a given set of inputs even if you cannot put
together a working query that at least gets you close to that output.
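
For illustration, if an ongoing count really is what's wanted, a sketch
against the table definition upthread (guessing at the desired ordering):

SELECT lan_id, net_id, port_id,
       count(*) OVER (PARTITION BY lan_id, net_id
                      ORDER BY port_id
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         AS running_count
FROM networks;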

David J.


Re: Interpolation problem - pg 12.4 - full correct version!

2020-08-19 Thread David G. Johnston
On Wed, Aug 19, 2020 at 11:51 AM Pól Ua Laoínecháin  wrote:

>
> I think my *MAJOR* problem is that I've developed what is,
> essentially, a totally brute force approach - and this simply won't
> work at the scenario becomes more complex - take a look at the CASE
> statement - it's horrible and would only become exponentially worse as
> the number NULLs rises.
>
> So, my question is: Is there a recognised technique (using SQL only,
> not PL/pgSQL - soutions based on the latter are easy to find) whereby
> I can do a basic Linear Interpolation?
>
>
I don't have a recognized technique, nor care to ponder one right now, but
what you've described would best be done in pure SQL using WITH RECURSIVE,
which provides an iterative approach to SQL result building.  Which is more
commonly done in a procedural language.  The algorithm you describe is an
iterative algorithm and so I'm wondering why place the arbitrary
restriction on using pure SQL when it likely wouldn't provide a very
readable or performant solution relative to a procedural (pl/pgsql or
otherwise) one?
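
For reference, the general shape of the construct (a trivial counter, not an
interpolation solution):

WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n + 1 FROM t WHERE n < 5
)
SELECT n FROM t;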

David J.


Re: Substitute Variable in select query

2020-08-24 Thread David G. Johnston
On Monday, August 24, 2020, harish supare  wrote:

> Hi Team,
>
> Would like to know what substitute/input variables are available in psql.
>
>
> Oracle we use &  - select a, b , c from table where a like 
>
> Is there an alternative in psql?
>
>
Colon - read the documentation, psql section, for the exact syntax.
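
For instance (table and variable names invented):

\set pattern 'abc%'
SELECT a, b, c FROM mytable WHERE a LIKE :'pattern';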

David J.


Re: Substitute Variable in select query

2020-08-24 Thread David G. Johnston
On Mon, Aug 24, 2020 at 7:21 AM harish supare 
wrote:

> thanks for the reply David.
>
> In case of Colon I need to set the variable first, my requirement is my
> select query should prompt for the input.
>
>
Please don't top-post.

psql does not have a feature that will prompt users during the execution of
a query.  It does, as documented, have a "\prompt" meta-command though.
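
E.g., in a script (names invented):

\prompt 'Enter a pattern: ' pattern
SELECT a, b, c FROM mytable WHERE a LIKE :'pattern';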

David J.


Re: I'm surprised that this worked

2020-09-22 Thread David G. Johnston
On Tue, Sep 22, 2020 at 6:34 PM raf  wrote:

> Hi,
>
> I just wrote a query that I didn't expect to work but I
> was pleasantly surprised that it did. It looked
> something like this:
>
>   select
> a.aaa,
> c.ccc,
> d.ddd1,
> d.ddd2
>   from
> tbla a,
> tblb b,
> tblc c,
> funcd(c.id) d
>   where
> a.something = something and
> b.something = a.something and
> c.something = b.something
>
> How does it know which c.id to use for the function
> without going all cartesian product on me?


Using the comma-separated from syntax doesn't force the planner to perform
a full multi-relation cartesian join (though conceptually that is what
happens) - it still only joins two relations at a time.  After it joins a,
b, and c it joins each row of that result with all of the rows produced by
evaluating funcd(c.id).

From the SELECT docs for LATERAL:

"When a FROM item contains LATERAL cross-references, evaluation proceeds as
follows: for each row of the FROM item providing the cross-referenced
column(s), or set of rows of multiple FROM items providing the columns, the
LATERAL item is evaluated using that row or row set's values of the
columns. The resulting row(s) are joined as usual with the rows they were
computed from. This is repeated for each row or set of rows from the column
source table(s)."

That said, the planner would be within its rights to indeed evaluate
funcd for every single row in tblc - applying c.something=b.something to
the final result would still cause those rows from funcd where the
attribute something for the given c.id matches the where clause filter to
be excluded.
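
Spelled out explicitly, the FROM clause in question is roughly equivalent to:

SELECT a.aaa, c.ccc, d.ddd1, d.ddd2
FROM tbla a, tblb b,
     tblc c CROSS JOIN LATERAL funcd(c.id) AS d
WHERE ...;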

> I was sure I'd done something similar once that
> (sensibly) didn't work, and I needed a loop to call the
> function in, but I might be thinking of something in an
> outer join's "on" clause. Does that make sense?
>

You probably tried it before we added LATERAL to our engine.

David J.


Re: Rows removed on child table when updating parent partitioned table.

2020-10-01 Thread David G. Johnston
The convention on these lists is to inline or bottom-post, please do not
top-post.

On Thu, Oct 1, 2020 at 10:41 AM Jonathan Strong 
wrote:

> I've been away from coding for several years, but dusting off my chops and
> getting back up to speed with PostgreSQL (love it!). So please forgive me
> if my early answers here come off as naive. But my understanding of this
> suggests that you shouldn't be using "update" on a serial field.
>

Yes Jonathan, your present understanding is flawed.  The OP has provided a
self-contained simple test case for the problem at hand - which even if not
"best practice" is indeed valid to do and demonstrates the problem quite
clearly.  Without actually testing it out I would say that this is likely
indeed an oversight in the partition row movement feature - it didn't take
into account the ON UPDATE/ON DELETE clause.

Adding Robert Haas, who committed the row movement feature [1].

We document on the UPDATE reference page that such an update is performed
as a DELETE + INSERT.  Given that implementation detail, the observed
behavior is what one would expect if no special consideration has been
given to make row movement between partitions preserve (via deferred
evaluation), or recreate the foreign key relationship.

For now I would say you should consider the two features incompatible; and
we need to update the documentation to reflect that reality more directly,
barring a solution being proposed, and hopefully back-patched, instead.  I
concur with the observation that one would expect these two features to
interact better with each other and think it could possibly be done as a
bug fix for the POLA violation.

David J.

[1]
https://github.com/postgres/postgres/commit/2f178441044be430f6b4d626e4dae68a9a6f6cec


Re: temp table same name real table

2020-10-01 Thread David G. Johnston
On Wed, Sep 30, 2020 at 7:41 AM Tom Lane  wrote:

> If you really really need to do this, I'd counsel using EXECUTE to
> ensure no caching happens.  But I concur with Michael that it's
> fundamentally a bad idea.
>

Agreed, though the documentation seems a bit loose here.  The fact that the
temp table hides the permanent one is a side-effect of pg_temp being placed
first in the default search_path.  If it is explicitly placed last the
permanent table would be found again.
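
E.g.:

-- with pg_temp explicitly last, permanent tables win again for new plans
SET search_path TO "$user", public, pg_temp;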

Adding a reminder that search_path searching happens only during new plan
creation (even if we don't generally cover caching implementation in
detail, though I didn't look around for this one) seems worthwhile.

I propose the following:

diff --git a/doc/src/sgml/ref/create_table.sgml
b/doc/src/sgml/ref/create_table.sgml
index 087cad184c..a400334092 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -171,8 +171,9 @@ WITH ( MODULUS numeric_literal, REM
   If specified, the table is created as a temporary table.
   Temporary tables are automatically dropped at the end of a
   session, or optionally at the end of the current transaction
-  (see ON COMMIT below).  Existing permanent
-  tables with the same name are not visible to the current session
+  (see ON COMMIT below).  The default
+  search_path includes the temporary schema first and so identically
+  named existing permanent tables are not chosen for new plans
   while the temporary table exists, unless they are referenced
   with schema-qualified names. Any indexes created on a temporary
   table are automatically temporary as well.

David J.


Re: How to write such a query

2020-09-18 Thread David G. Johnston
On Fri, Sep 18, 2020 at 1:18 PM Igor Korot  wrote:

> As I said - Access does it without changing the query internally (I
> presume).
>
> I want to do the same with PostgreSQL.
>

I suspect they basically do the equivalent of:

UPDATE ... WHERE CURRENT OF cursor_name;

https://www.postgresql.org/docs/12/sql-update.html
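
A rough sketch of that pattern at the SQL level (table and column names
invented):

BEGIN;
DECLARE cur CURSOR FOR SELECT id, col FROM mytable FOR UPDATE;
FETCH NEXT FROM cur;
UPDATE mytable SET col = 'new value' WHERE CURRENT OF cur;
COMMIT;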

David J.


Re: Detecting which columns a query will modify in a function called by a trigger

2020-10-02 Thread David G. Johnston
On Tue, Mar 3, 2020 at 4:19 PM David G. Johnston 
wrote:

> On Tue, Mar 3, 2020 at 4:11 PM Adrian Klaver 
> wrote:
>
>> On 3/3/20 3:06 PM, David G. Johnston wrote:
>> > On Tue, Mar 3, 2020 at 3:48 PM Adrian Klaver > > <mailto:adrian.kla...@aklaver.com>> wrote:
>> >
>> > The link was for automatically updateable views. If you want to do
>> > something more involved then see:
>> >
>> >
>> https://www.postgresql.org/docs/12/rules-views.html#RULES-VIEWS-UPDATE
>> >
>> >
>> > CREATE TRIGGER works with views; I usually see recommendations to start
>> > there and avoid rules if at all possible.
>>
>> The above suggests triggers then rules.
>>
>
Yeah, I see that buried in there.  The link itself and the page itself are
something like 95% rules coverage so it still seems worth pointing out even
> in hindsight.
>
> Maybe add a link to the CREATE TRIGGER section in there...
>

Minor doc patch for this attached for consideration.

David J.


v1-notify-doc-fixup.patch
Description: Binary data


Re: Profile Creation

2020-10-02 Thread David G. Johnston
On Fri, Oct 2, 2020 at 1:43 PM Brajendra Pratap Singh <
singh.bpratap...@gmail.com> wrote:

> How can we create a user profile in open postgresql db?
>
?

CREATE TABLE user_profile (...);
INSERT INTO user_profile VALUES (...);

David J.


Re: Rows removed on child table when updating parent partitioned table.

2020-10-02 Thread David G. Johnston
On Fri, Oct 2, 2020 at 9:11 AM Eduard Català 
wrote:

> If no one else gives an opinion I will open a bug for at least, force an
> update of the documentation.
>

It's been seen and begun to be discussed over on -hackers [1].

[1]
https://www.postgresql.org/message-id/flat/CA%2BHiwqFvkBCmfwkQX_yBqv2Wz8ugUGiBDxum8%3DWvVbfU1TXaNg%40mail.gmail.com

David J.


Re: suggestion the process to sync the data from PROD DB to Staging environment

2020-05-27 Thread David G. Johnston
On Wed, May 27, 2020 at 9:31 AM postgann2020 s 
wrote:

> Thanks, David,
>
> Please find the environment details.
>
> Environment:
> PROD:
> OS: RHEL 7.1
> Postgres: 9.5.15
>
> Staging:
> OS: RHEL 7.1
> Postgres: 9.5.15
>

Ok...not particularly helpful though I do see you are not keeping up with
minor and major releases.

Maybe the terms metrics and goals would have been better...like how big is
the database and what kind of network do the two machines exist in and how
would they communicate data from one to the other...

David J.


Re: suggestion the process to sync the data from PROD DB to Staging environment

2020-05-27 Thread David G. Johnston
On Wed, May 27, 2020 at 8:56 AM postgann2020 s 
wrote:

> Could someone please suggest the process to *sync the data from PROD DB
> to the Staging environment* with minimal manual intervention or
> automatically.
>

Read up on the general purpose "bash" scripting language, the PostgreSQL
"pg_dump" and "pg_restore" commands, "ssh", and "cron".

"cron" and "bash" provide for the "automatically" requirement.

It is possible to assemble something functional with those tools.  Whether
it will actually work in your specific situation is impossible to say since
you provide zero information about your environment.
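
For illustration only, the smallest version of that assembly might be a
single cron line (hosts and database names invented; a real setup needs
error handling):

# crontab entry: refresh staging from prod nightly at 02:00
0 2 * * * pg_dump -Fc -h prod-host proddb | pg_restore -h staging-host -d stagingdb --clean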

David J.


Re: SELECT query results are different depending on whether table statistics are available.

2020-05-27 Thread David G. Johnston
On Wed, May 27, 2020 at 8:09 PM James Brauman 
wrote:

> -- Run select query (involving several CTEs).
> SELECT ...;
>
> I haven't generated a minimal test case yet, but I did notice that if
> all CTEs in the SELECT query are defined using AS NOT MATERIALIZED the
> results are always the same regardless of whether the table has been
> ANALYZED yet.
>
> Could anyone share knowledge about why this is happening?
>

A likely scenario is you are missing an ORDER BY in a location where you
are depending on deterministic row ordering and its changing out from
underneath you.

David J.


Re: Query returns no rows in pg_basebackup cluster

2020-05-25 Thread David G. Johnston
On Sunday, May 24, 2020, Andrus  wrote:

> Hi!
>
> Backup in created in Windows from Linux server using pg_receivewal and
>>> pg_basebackup .
>>> Can this backup used for PITR in Linux ?
>>>
>> No.  Physical copies need to be based on the same platform.  If you
>> wish to replicate a cluster without any platform, architecture or even
>> not-too-many major version constraints, there is also logical
>> replication available since v10.
>>
>
> Will logical replication also allow two modes:
>  1. PITR recovery can used if needed
>  2. Hot standby: User databases in both clusters contain same data.
>
>
Why are you spending so much effort on this Windows/Linux hybrid setup?  Get
yourself another Linux server and setup physical replication.  It sounds
like it will exactly meet your requirements and you will waste more time
and money working out alternatives than the server would cost.

David J.


Re: Query returns no rows in pg_basebackup cluster

2020-05-24 Thread David G. Johnston
On Sun, May 24, 2020 at 4:10 PM Michael Paquier  wrote:

> On Fri, May 22, 2020 at 09:53:37AM +0300, Andrus wrote:
> > Backup in created in Windows from Linux server using pg_receivewal and
> pg_basebackup .
> > Can this backup used for PITR in Linux ?
>
> No.  Physical copies need to be based on the same platform.  If you
> wish to replicate a cluster without any platform, architecture or even
> not-too-many major version constraints, there is also logical
> replication available since v10.
>

Does the O/S that the client software runs on really affect this?  I would
expect that you could store the offline files anywhere.  As long as the
architecture your original server is on and the one you are restoring to
are the same the restored server should work.  They are just bytes until a
server interprets them, no?

David J.


Re: problem with self built postgres 9.0.9

2020-05-29 Thread David G. Johnston
On Fri, May 29, 2020 at 7:08 AM Gabriele Bulfon  wrote:

> Amazing! Rebuilt without -O and it worked like a charm!
> Thanks, at the moment I need to stick to 9.0.9 on this machine to be able
> to reuse the same database files.
>
>
Just to be thorough.  You can update to 9.0.23 (i.e., build against the tip
of the 9.0.x set of branches) and still use the same database files.  For
all versions (starting with v10 the version has only two components, not
three) changing the final digit in the version is a code-only change.

There is no material difference in risk between building 9.0.23 against the
newer O/S and compiler, etc., and building 9.0.9 against the newer
O/S and compiler, etc.  You assumed basically maximum risk when you chose
to keep using version 9.0 and upgraded everything else around it to
versions that were possibly never tested against it - and if they were
tested it is more likely they were tested against 9.0.23 as it is years
more current.

David J.


Re: GPG signing

2020-05-26 Thread David G. Johnston
On Tuesday, May 26, 2020, Marc Munro  wrote:

> I need to be able to cryptographically sign objects in my database
> using a public key scheme.
>
> Any other options?  Am I missing something?
>

This feels like it should be an application (middleware...) concern, not
the database proper.  i.e., store previously signed data into a normal text
or bytea field.

That said there is:

 https://www.postgresql.org/docs/12/pgcrypto.html#id-1.11.7.34.7

David J.


Re: How to get the OID of a view

2020-05-22 Thread David G. Johnston
On Fri, May 22, 2020 at 9:15 AM stan  wrote:

> I am trying to write a query to return the names, and data types of all the
> columns in a view. It has been pointed out to me that the best approach
> would be using pg_catalog. OK, so I found pg_view, which I can get the
> names
> of a the views from and pg_attribute which can give me the column names,
> but it looks like i need to join this on OID, and pg_table does not have
> that data.
>
>
The table you want is pg_class:

https://www.postgresql.org/docs/12/catalog-pg-class.html

If the system views listed in [1] don't provide you what you need you need
to fall-back to the system tables listed in [2].

[1] https://www.postgresql.org/docs/12/views-overview.html
[2] https://www.postgresql.org/docs/12/catalogs-overview.html

David J.


Re: query, probably needs window functions

2020-05-22 Thread David G. Johnston
On Fri, May 22, 2020 at 12:38 PM Michael Lewis  wrote:

> I believe something like this is what you want. You might be able to do it
> without a sub-query by comparing the current name value to the lag value
> and null it out if it's the same.
>

This.  I misread the question.  You might also consider just outputting one
row per person and output the related phone numbers using
string_agg(phone.number, E'\n')
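
i.e., something along these lines (using the table definitions from the
question):

SELECT person.id, person.name,
       string_agg(phone.number, E'\n') AS numbers
FROM person
JOIN phone ON person.id = phone.person_id
GROUP BY person.id, person.name;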

David J.


Re: query, probably needs window functions

2020-05-22 Thread David G. Johnston
On Friday, May 22, 2020, Scott Ribe  wrote:

> given, let's say:
>
> create table person (id int not null, name varchar);
> create table phone (id int not null, person_id int not null, number
> varchar);
>
> select person.*, phone.number from person join phone on (person.id =
> phone.person_id) order by...
>
> How would you get results where only the first row for a person was filled
> in, with rest of that person's phones showing blanks for those columns? I'm
> guessing that window functions provide this capability, but I don't know
> how.
>
>
If a left join doesn’t give you the answer you want you should probably
provide exact input and output data that you are working with/toward.

David J.


Re: Suggestion on index creation for TEXT data field

2020-05-21 Thread David G. Johnston
On Thu, May 21, 2020 at 7:45 AM postgann2020 s 
wrote:

> >And what type of data exactly are we talking about.  ==> Column is
> storing GIS data.
>

GIS data isn't really TEXT and isn't a core datatype of PostgreSQL so this
is maybe better posted to the PostGIS community directly...

David J.


Re: Suggestion on index creation for TEXT data field

2020-05-21 Thread David G. Johnston
On Thu, May 21, 2020 at 7:28 AM postgann2020 s 
wrote:

> which is having an avg width of 149bytes.
>

The average is meaningless if your maximum value exceeds a limit.

2. What type of index is the best suited for this type of data?.
>

And what type of data exactly are we talking about.  "TEXT" is not a useful
answer.

If the raw data is too large no index is going to be "best" -  as the hint
suggests you either need to drop the idea of indexing the column altogether
or apply some function to the raw data and then index the result.

David J.


Re: When to use PARTITION BY HASH?

2020-06-02 Thread David G. Johnston
On Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> That *might* turn out to be the case with a small number of distinct
> values in the partitioning column(s), but then why rely on hash
> assignment instead of using PARTITION BY LIST in the first place?
>
> [1] https://www.postgresql.org/docs/12/ddl-partitioning.html
>

Why the cross-posting? (-performance is oriented toward problem solving,
not theory, so -general is the one and only PostgreSQL list this should
have been sent to)

Anyway, quoting the documentation you linked to:

"When choosing how to partition your table, it's also important to consider
what changes may occur in the future. For example, if you choose to have
one partition per customer and you currently have a small number of large
customers, consider the implications if in several years you instead find
yourself with a large number of small customers. In this case, it may be
better to choose to partition by HASH and choose a reasonable number of
partitions rather than trying to partition by LIST and hoping that the
number of customers does not increase beyond what it is practical to
partition the data by."

Hashing does indeed preclude some of the benefits and introduces others.

I suspect that having a hash function that turns its input into a different
output and checking for equality on the output would be better than trying
to "OR" a partition list together in order to combine multiple inputs onto
the same table.
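
For concreteness, the DDL shape under discussion (a sketch with four
partitions; table and column names invented):

CREATE TABLE orders (
    customer_id int NOT NULL,
    total numeric
) PARTITION BY HASH (customer_id);

CREATE TABLE orders_p0 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p1 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_p2 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_p3 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 3);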

David J.


Re: split_part for the last element

2020-10-23 Thread David G. Johnston
On Fri, Oct 23, 2020 at 8:47 AM Nikhil Benesch 
wrote:

> Is there another option I'm missing? Would there be interest in
> extending split part so that negative indices counted from the end, as
> in:
>
> split_part('foo bar baz', ' ', -1) -> 'baz'
>

Some thoughts:

I'm torn here because this would be the first usage of this concept in
PostgreSQL (I think).

Tangentially, I noticed that we have a "starts_with" function but no
corresponding "ends_with".

It's been a while but there used to be a systemic inertia working
against adding minor useful functions such as these.

With the new documentation layout I would at least consider updating the
description for the normal functions with an example on how to formulate an
expression that works contra-normally, and in the case where there does
exist such a specialized function, naming it.
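
For the record, one way to formulate the contra-normal case today
(admittedly ugly):

-- grab the last space-separated element by reversing twice
SELECT reverse(split_part(reverse('foo bar baz'), ' ', 1));  -- 'baz'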

David J.


Re: Copy json from couchbase to postgres

2020-10-26 Thread David G. Johnston
On Mon, Oct 26, 2020 at 1:10 PM Rushikesh socha 
wrote:

> Any suggestions on copying .json which is exported from couchbase
> using cbexport json. I used copy command but most of them are throwing
> error.
>
> pgdocstore=# copy schaname.tablename from 'path/filename.json';
> ERROR:  invalid input syntax for type json
> DETAIL:  Token "H" is invalid.
>

I'd suggest making a self-contained posting including the data you are
trying to import and gloss over how it is being generated.  You've gotten
enough correct to get a json input syntax error so it's the actual data that
seems to matter.

David J.


Re: Certificates

2020-08-10 Thread David G. Johnston
The convention on these lists is to inline or bottom-post.

On Mon, Aug 10, 2020 at 11:11 AM Martin Gainty  wrote:

> cant you use keytool ?
>

That wasn't the question, the OP already indicated they can do this
successfully in JDBC.

David J.


Re: Bytea Example

2020-08-13 Thread David G. Johnston
On Thursday, August 13, 2020, Naveen Kumar  wrote:

> Can someone please give me an example on byteA data type.
>
> 1. How to import a image/text file into Bytea data type.?
> 2. How to export the same?
>

At a simple level it's no different than importing and exporting character
data using a “text/varchar” field or a number using an integer field.  For
bytea the content is just raw bytes instead of characters or numbers.
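
A rough client-side round trip using psql, for example (paths and table
invented; base64 is just one convenient transport):

-- import: read the file client-side, base64 it, decode to bytea server-side
\set img `base64 /tmp/photo.png`
INSERT INTO images (data) VALUES (decode(:'img', 'base64'));

-- export: re-encode and write back out client-side
\copy (SELECT encode(data, 'base64') FROM images LIMIT 1) TO '/tmp/photo.b64'
-- then:  base64 -d /tmp/photo.b64 > /tmp/photo.png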

David J.


Re: Certificates

2020-08-10 Thread David G. Johnston
On Mon, Aug 10, 2020 at 10:54 AM Shankar Bhaskaran 
wrote:

> How does psql import the server certificate?
>

See:

https://www.postgresql.org/docs/12/libpq-envars.html

Namely the "PGSSL*" prefixed environment variables.
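
For example, to point psql at a particular root certificate (paths
illustrative):

PGSSLMODE=verify-full PGSSLROOTCERT=/path/to/root.crt psql -h db.example.com mydb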

It works by default because both the server and client are usually
installed from the same source and the same default certificate files are
provided to each.

David J.


Re: Can't seem to mix an inner and outer join in a query and get it to work right.

2020-06-29 Thread David G. Johnston
On Monday, June 29, 2020, David Gauthier  wrote:

>
>                        sqf_id                       | sqf_sl  |  as_cl  | wa_id |   type
> ----------------------------------------------------+---------+---------+-------+-----------
>  arazhang_20.06.28-20:59:06_soc_navi24_main@4404799 | 4404957 | 4405363 |  2772 | autosmoke
>  arazhang_20.06.28-20:59:06_soc_navi24_main@4404799 | 4404957 | 4405363 |  2773 |
>  arazhang_20.06.28-20:59:06_soc_navi24_main@4404799 | 4404957 | 4405363 |  2774 |
>  bgregory_20.06.29-09:46:49_raphael_main@1277530    | 1277949 |         |       |
> (4 rows)
>
> ... dvm.workarea_env on
>       (sqf.sqf_runs.submitted_changelist = dvm.workarea_env.p4_changelist)
>     inner join dvm.dvm_events on
>       (dvm.workarea_env.wa_id = dvm.dvm_events.wa_id
>        and dvm.dvm_events.type = 'autosmoke')
> where sqf.sqf_runs.submitted_shelvelist in (4404957, 1277949);
>                        sqf_id                       | sqf_sl  |  as_cl  | wa_id |   type
> ----------------------------------------------------+---------+---------+-------+-----------
>  arazhang_20.06.28-20:59:06_soc_navi24_main@4404799 | 4404957 | 4405363 |  2772 | autosmoke
> (1 row)
>
>
> Is there a way to retain the effect of that first outer join while
> enforcing that inner join ?
>

The quick non-testable answer is most likely.  I usually end up doing
trial-and-error and adding possibly unnecessary parentheses to force the
needed order of operations (or moving part of the join into an explicit
subquery, possibly using a CTE/WITH).  Right now it seems that the query is
forcing, via the inner join, a final output where the column
type=‘autosmoke’.  i.e., its doing the outer join first then the inner.
You seem to want the reverse.
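
Untested, but with the names from your query the parenthesized form would
look roughly like:

SELECT s.*, e.type
FROM sqf.sqf_runs s
     LEFT JOIN (dvm.workarea_env w
                JOIN dvm.dvm_events e
                  ON w.wa_id = e.wa_id AND e.type = 'autosmoke')
       ON s.submitted_changelist = w.p4_changelist
WHERE s.submitted_shelvelist IN (4404957, 1277949);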

David J.


Re: Different results from identical matviews

2020-07-02 Thread David G. Johnston
On Thu, Jul 2, 2020 at 8:06 AM Anders Steinlein  wrote:

> On Thu, Jul 2, 2020 at 3:55 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Thursday, July 2, 2020, Anders Steinlein  wrote:
>>>
>>>
>>> I just wanted to add that we're on Postgres 12.3. This matview has been
>>> with us since 9.4 days, and we have not experienced any such issues before
>>> (could be customers who haven't noticed or reported it to us, of course...).
>>>  version
>>>
>>>
>>> -
>>>  PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu,
>>> compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
>>>
>>
>>
>>  I concur that the determinism doesn’t seem like a problem - but not much
>> else does either.  As a shot in the dark does pg_depend show any
>> differences between the dependencies for the two views?
>>
>
> Could be worth checking, yes. Could you give me any guidance as to how to
> compare this? Never looked at pg_depend before -- which of the columns
> should have the oid for the matview I want to look up dependencies for?
>

It would be an educational/trial-and-error experience for me as well.  That
you found a difference in pg_rewrite.ev_action probably provides a more
fruitful avenue of attack though I'm inexperienced there as well.  I do
believe that inspecting pg_depend will also highlight whatever difference
you are seeing in the ev_action.  What tickles my curiosity is why that
difference (whatever it is, I haven't looked) isn't manifesting in the \d+
output for the materialized view.

David J.


Re: Different results from identical matviews

2020-07-02 Thread David G. Johnston
On Thursday, July 2, 2020, Anders Steinlein  wrote:

>
>
>> Thanks for the tip, but I'm having a hard time thinking that's the case,
>> seeing as I'm unable to trigger the wrong result no matter how hard I try
>> with a new definition/manual query. I've introduced random ordering to the
>> first CTE-clause (where the initial citext values comes from, and casing
>> thus could differ in some order) which doesn't change the result.
>>
>
> I just wanted to add that we're on Postgres 12.3. This matview has been
> with us since 9.4 days, and we have not experienced any such issues before
> (could be customers who haven't noticed or reported it to us, of course...).
>  version
>
> 
> -
>  PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg18.04+1) on x86_64-pc-linux-gnu,
> compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
>


 I concur that the determinism doesn’t seem like a problem - but not much
else does either.  As a shot in the dark does pg_depend show any
differences between the dependencies for the two views?

How did this migrate from 9.4 to 12?

It would be helpful if “Explain analyze refresh materialized view” were a
thing (is it?)

If you can backup and restore the existing database (basebackup is more
likely, but pg_dump would be more useful) and still observe the problem
then maybe I see hope for digging down into the cause.  Otherwise I’d limit
my decision to testing for the symptom with the solution being to rebuild
any problem views.

David J.


Re: Different results from identical matviews

2020-07-02 Thread David G. Johnston
On Thu, Jul 2, 2020 at 8:44 AM Tom Lane  wrote:

> A plausible explanation for how things got that way is that citext's
> equality operator wasn't in your search_path when you created the original
> matview, but it is in view when you make the new one, allowing that
> equality operator to capture the interpretation of USING.  Unfortunately,
> since the reverse-listing of this join is just going to say "USING
> (email)", there's no way to detect from human-readable output that the
> interpretation of the USING clauses is different.  (We've contemplated
> introducing not-SQL-standard syntax to allow flagging such cases, but
> haven't pulled the trigger on that.)
>

The citext extension seems to have been installed into the public schema as
well which could introduce the CVE-2018-1058 fix as a potential moving part.

It seems a bit odd though since the textual query does specify "DISTINCT
mails_contacts_opens.email::public.citext", so it does seem to be
search_path induced: the view couldn't exist if the extension were simply
missing, nor would an extension-specific equality operator be present to
match ahead of the default equality operator.  But then those casts also make me
question whether the source tables are defined using text instead of citext
in which case the joins using text equality would be expected and their
using citext equality in the new queries potentially suspect.

David J.


Re: Can't seem to mix an inner and outer join in a query and get it to work right.

2020-07-01 Thread David G. Johnston
The convention here is to bottom post or inline responses.

On Wed, Jul 1, 2020 at 9:51 AM David Gauthier 
wrote:

> Actually, I want the outer join first.  If it finds something, then move
> on to the inner join and filter out all those that don't join to a rec with
> 'autosmoke'.  But if the outer join does not connect to the workarea_env
> table, then just return what you have (the purpose of the outer join)
>
>>
>>>
So your final result - ignoring columns - is basically:
(sqf, (workarea, events))

where either the entire (workarea, events) is null, or if it is
non-null then workarea must also be non-null

Thus: ((workarea is left joined against events) with the whole thing left
joined against sqf).  And we are back to the join ordering precedence since
what you originally wrote was ((sqf, workarea), events).

In short - two outer joins; you can work out precedence either with
syntactic order or parentheses.

David J.


Re: BigSerial and txid issuance

2020-07-08 Thread David G. Johnston
On Wed, Jul 8, 2020 at 8:18 AM Yorwerth, Adam 
wrote:

> Is it possible for two transactions to interleave their issuance of these
> two variables?
>
>
>
> Is it possible for transaction 1 to be issued txid 1001 and offset 12 and
> transaction 2 to be issued txid 1002 and offset 11?
>

Given all of the disclaimers about serial value issuance you should assume
that it is possible.

David J.


Re: Basic question about structuring SQL

2020-07-07 Thread David G. Johnston
On Tue, Jul 7, 2020 at 4:41 AM Robert Inder 
wrote:

> So how should I structure my chunks of SQL so that I can have "safe"
> (all-or-nothing) blocks,
> AND use them from within one another?
>

While there are more advanced constructs that may aid here I would suggest
just following two rules:

top-level scripts handle transactions - usually just a single BEGIN/COMMIT
at the top/bottom of the script respectively.  These are scripts you are
allowed to name on the psql command line.
support scripts don't handle transactions - these are scripts you include
into the main script with \i or \ir
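
Roughly (file names invented):

-- deploy.sql, the top-level script, run as:  psql -f deploy.sql
BEGIN;
\ir tables.sql
\ir data.sql
COMMIT;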

David J.


Re: Both side privileges

2020-07-13 Thread David G. Johnston
On Mon, Jul 13, 2020 at 4:42 PM Jean-Philippe Chenel 
wrote:

> Hi,
>
> I try to give userA privileges on userB objects and same thing to the
> userB, giving privileges on userA objects.
>
> Grant userB to userA; —ok
> Grant userA to userB; —error: role userB is already member of role userA
>

Create a "group role" that retains ownership and then add both users to
that group.
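
E.g. (group name invented):

CREATE ROLE shared_owner NOLOGIN;
GRANT shared_owner TO usera, userb;
-- existing objects can then be handed to the group:
ALTER TABLE some_table OWNER TO shared_owner;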

David J.


Re: some random() clarification needed

2020-07-14 Thread David G. Johnston
On Tue, Jul 14, 2020 at 8:15 AM Marc Millas  wrote:

> select id, prenom from prenoms where id=ceiling(random()*2582);
>
> expecting to get, allways, one line.
> But its not the case.
> around 15% of time I get 0 lines which is already quite strange to me.
> but 10% of time, I get a random number of lines, until now up to 4.
> even weirder (to me !)
>
> so, can someone please clarify ?
>
>
You are basically asking:

For each row in my table compare the id to some random number and if they
match return that row, otherwise skip it.  The random number being compared
to is different for each row because random() is volatile and thus
evaluated for each row.
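
If the goal was one random row, the random value has to be computed once,
e.g. via an uncorrelated subquery (evaluated a single time per execution):

-- at most one row per id; still zero rows if that id doesn't exist
SELECT id, prenom FROM prenoms WHERE id = (SELECT ceiling(random()*2582));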

David J.


Re: some random() clarification needed

2020-07-14 Thread David G. Johnston
Please don't top-post.  Inline (with trim) is better but at minimum
bottom-post.

On Tue, Jul 14, 2020 at 9:01 AM Marc Millas  wrote:

> Hi,
> your answer helps me understand my first problem.
> so, I rewrote a simple loop so as to avoid the "volatile" behaviour.
> (at least I was thinking I did... looks like I was wrong !)
> step by step loop:
> DO $$
> BEGIN
>   FOR counter IN 1..1000 LOOP
> begin
> declare
> id1 integer =ceiling(random()*2582);
> id3 date= '2000-01-01';
> id2 date;
> pren varchar;
> begin
> id2=id3 + (random()*7200)::integer;
> SELECT prenom FROM prenoms WHERE id=id1 into pren;
> INSERT INTO testparttransac (datenaissance, prenoms) values(id2,pren);
>   end;
> end;
> END LOOP;
> END; $$;
>
> I truncated the table, executed the loop with no errors, and expected that
> a select count(*)
> may answer 1000 !
> no.
> it varies, from less than 1000 (much less, something like 900)
> and more than 1000 (up to 1094)
>
> so... what s "volatile" in the loop ?
>

Everything...

You are setting id1 to the result of an expression inside the loop.
Every time that statement gets executed within the loop a new random number
is produced.

I mean, even "id3 date = '2000-01-01'" is repeatedly casting (I think) the
string to a date and assigning it to the variable even though that
statement overall is effectively immutable.

David J.


Re: Surprising connection issue

2020-07-14 Thread David G. Johnston
On Tue, Jul 14, 2020 at 8:25 AM David Gasa i Castell 
wrote:

> And my surprise went when I see the connection done while there is no user
> granted to connect the database...
>

https://www.postgresql.org/docs/12/ddl-priv.html

"""
PostgreSQL grants privileges on some types of objects to PUBLIC by default
when the objects are created. No privileges are granted to PUBLIC by
default on tables, table columns, sequences, foreign data wrappers, foreign
servers, large objects, schemas, or tablespaces. For other types of
objects, the default privileges granted to PUBLIC are as follows:
***CONNECT and TEMPORARY (create temporary tables) privileges for
databases;*** (emphasis mine)
 EXECUTE privilege for functions and procedures; and USAGE privilege for
languages and data types (including domains).
"""

David J.


Re: Convert hot_standby 9.4 postgresql into standalone server

2020-07-14 Thread David G. Johnston
On Tuesday, July 14, 2020, Julie Nishimura  wrote:

> Hello, we currently have 9.4 hot_standby master-slave pair. Going forward,
> we can keep only one server. How can I convert the system properly?
>

If you are keeping the primary you shouldn’t have to do anything.  The
absence of a secondary server shouldn’t impact the primary.  You might be
recording more detail to WAL than strictly necessary but that isn’t wrong
outright.

David J.


Re: survey: psql syntax errors abort my transactions

2020-07-02 Thread David G. Johnston
On Thu, Jul 2, 2020 at 8:54 AM Jeremy Schneider  wrote:

>
> https://www.postgresql.org/message-id/flat/CABTbUpiAOKZ405uArt8cJFtC72RhzthmvWETQK_6Qw0Ad-HquQ%40mail.gmail.com
>
> This thread on hackers actually seemed kindof short to me. Not nearly
> enough bike-shedding to call it a closed case.
>

Seemed about right:

"We should do this"
Yes
No - with a reason
No - with a reason
No - improve user education
No - emphatically
Yes - but ends ups deferring to the majority

No one else chooses to voice an opinion

The status quo prevailed since no-one chose to contribute further arguments
for change and the original patch was retracted.  What kind of
"bike-shedding" (which seems to be used incorrectly here) would you expect?

All I can speak for is personal usage but I don't find the current default
to be an issue.  I'm also generally opposed to changing this kind of
default even when I disagree with its current value.  If anything psql is a
bit too permissive by default IMO.  Default should be as safe as possible
even at the cost of user inconvenience - so that unknowledgeable people get
the most protection.  If options exist to trade safety for convenience that
is good - each user can make that trade-off for themselves and in the
process be aware of what exactly their decision entails.
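
For reference, the existing opt-in for this particular trade-off in psql,
e.g. in ~/.psqlrc:

\set ON_ERROR_ROLLBACK interactive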

David J.


Re: Feature suggestion: auto-prefixing SELECT query column names with table/alias names

2020-06-22 Thread David G. Johnston
On Sunday, June 21, 2020, Guy Burgess  wrote:
>
> a.id, a.title, b.id, b.title


You are missing some double-quotes there.

Of course, this can be achieved by avoiding the (often frowned-upon) SELECT
> * syntax in the first place and using explicit column names,


Or choose better, distinguishing, column names.


> Could there be an (admittedly non-standard) option to automatically prefix
> table/alias names to columns


The cost/benefit here seems quite unfavorable.

SQL is verbose, and SQL doesn't make fields inseparable from the relations
they are attached to.  So while "object.name" makes sense in an
object-oriented world, it doesn't make as much sense in SQL;
object.object_name does.

At the top level of a query, at least when dealing with physical relations,
client software has the ability to inspect the result metadata, and for
those output columns that come from relations it can look up related
information by OID.  If anything I would restrict manipulation to this
subset of the problem space.  What I'd want is some way to attach a "human
friendly" label to a column and have the server replace the output column
name for a physical column with that human readable label instead.  I'd
still have to alias derived values using "AS" but that's fine.

Tying this back to the original request: if we do focus on top-level tlist
names, then in most production cases you'd want "Object Name" instead of
"object.name" anyway - so absent something like I describe above you are
just back to writing:
SELECT object.name AS "Object Name"
instead of
SELECT *
and getting something besides "name" back through some behind-the-scenes
logic.

For ad-hoc queries I'd suggest that the incidence of desiring "SELECT *" is
considerably higher but that the query author has a considerably greater
propensity to know which name is which.

So, in short, this feels like something with a small but real audience but
a non-trivial design, let alone implementation, and has a large degree of
mitigation by adopting technology-driven coding and naming standards.  For
production outputs it doesn't even do that great a job as final output
names should strive to be human friendly as opposed to the developer
friendly names used in data models.

David J.


Re: scram-sha-256 encrypted password in pgpass

2020-06-22 Thread David G. Johnston
On Mon, Jun 22, 2020 at 3:32 PM Pavan Kumar  wrote:

> Adrian, David,
>
> Thank you so much for the quick response.
>
> What would be the point of storing the encrypted password instead of the
> plaintext one?
> As per our organization security policies, we can 't keep any  passwords
> in plain text format.
> I am working on postgres + pgbouncer setup, tested pgbouncer 1.14 where we
> have support to use encrypted password in userlist,txt file. I am
> surprised why  pgpass is not supporting encrypted passwords.
>
>
Just use a long string of random letters, numbers, and symbols and say it's
encrypted...

David J.


Re: Can the current session be notified and refreshed with a new credentials context?

2020-06-22 Thread David G. Johnston
On Mon, Jun 22, 2020 at 2:21 PM Ron  wrote:

> On 6/22/20 4:07 PM, AC Gomez wrote:
>
> Suppose you have the following scenario:
>
> 1: Call some function with a certain user and password
> 2: From inside that function, have several calls using DBLink
> 3: At some point during the running of that function a password rotation(a
> separate process) comes along and updates the session user password and the
> User Mappings with this new rotated password
> 4: Now there is a discrepancy between the password used when the session
> started and the password in the User Mappings
> 5: The result is that on the next DBLink call the main function will fail
> because the session is still running with the old password but we have
> changed the User Mappings.
>
> We have proven this by separating out every DBLINK call as its own new
> session and running password rotation in between dblink calls. Then things
> will work.
>
> My question: Is there a way to update or refresh the session with the new
> password that was rotated so that the main function keeps running
> seamlessly through all it's DBLink calls?
>
> If something like this is not available, then password rotation can *only
> run* when nothing else is running.
>
>
> I've not seen such a thing on *any* system.
>

I don't use DBLink but I tend to agree that as written this seems more
likely to be a user error type situation rather than an issue with the
feature.  You should probably provide a (minimally) reproducing script for
at least the client with annotations as to what is happening externally to
the script as it is being run.

IOW, you don't get to keep the function a black box while also saying it is
exactly the details of what is inside that function that is failing.

David J.


Re: scram-sha-256 encrypted password in pgpass

2020-06-22 Thread David G. Johnston
Please don't cross-post.

On Mon, Jun 22, 2020 at 1:35 PM Pavan Kumar  wrote:

> scram-sha-256 encrypted passwords are supported in .pgpass file ? If yes
> kindly provide us an example.
>
> I am using below format and it is not working for me
>
> *pglnx1*:*5432*:pgbouncer:*pgadmin*:"SCRAM-SHA-256$4096:6IDsjfedwsdpymp0Za7jaMew==$rzSoYL4ZYsW1WJAj7Lt3JtNLNR73AVY7sfsauikweblk][=:Hxx/juPXJZHy5djPctI=*"*
>
The documentation doesn't say so one way or the other so I would go with
no.  The password in the pgpass file has to be the plaintext password.  The
client, upon speaking with the server, will decide whether to send the
plaintext password to the server or encrypt it prior to transmission.

What would be the point of storing the encrypted password instead of the
plaintext one?

David J.


Re: create batch script to import into postgres tables

2020-06-19 Thread David G. Johnston
On Fri, Jun 19, 2020 at 7:33 AM Adrian Klaver 
wrote:

> On 6/19/20 7:17 AM, pepevo wrote:
> > I understand your post about "password does not take an argument, it is
> > meant to be used as is. The purpose is to force a password prompt." When
> > I used -W and --password=.  That's what I said I will try pgpassfile.
> > Thought it like mysq/oracle can indicate out without creating password
> > file.
>
> If you want to expose your password in the script file then:
>
> https://www.postgresql.org/docs/12/app-psql.html
>
> Usage
> Connecting to a Database
>
> "An alternative way to specify connection parameters is in a conninfo
> string or a URI, which is used instead of a database name. This
> mechanism give you very wide control over the connection. For example:
>
> $ psql "service=myservice sslmode=require"
> $ psql postgresql://dbmaster:5433/mydb?sslmode=require
>
> This way you can also use LDAP for connection parameter lookup as
> described in Section 33.17. See Section 33.1.2 for more information on
> all the available connection options."
>
> So:
>
> psql
> postgresql://PSmasteruser:mypassw...@hostname.amazonaws.com:5432/PSCIDR
>
> or
>
> psql 'dbname=PSCIDR user=PSmasteruser host=hostname.amazonaws.com port=
> 5432 password=mypassword '
>
>
Ok, so not "no way", but it's still a bad idea given the availability of
other better options.  Namely PGPASSWORD, .pgpass, or, less
desirably.pg_service.conf

The URI format that includes a password should be reserved for client
libraries and avoided when using psql (just the password part really though
I much prefer the service file option myself).

David J.


Re: create batch script to import into postgres tables

2020-06-19 Thread David G. Johnston
On Fri, Jun 19, 2020 at 6:58 AM Adrian Klaver 
wrote:

> On 6/19/20 6:53 AM, Pepe TD Vo wrote:
> > Thank you sir and I am sorry for the typo not having "--" on password.
> > I did spelling out with --password=mypassword
>
> Please go back and read my post again.
>

To be clear, there is no way to supply a password as a command line
argument.  It is fundamentally a bad idea and we don't even make it an
option.

You need to decide on one of the actual ways of supplying a password, or
choose an alternative authentication method like peer.

David J.


Re: Can the current session be notified and refreshed with a new credentials context?

2020-06-22 Thread David G. Johnston
On Mon, Jun 22, 2020 at 5:41 PM AC Gomez  wrote:

> But what I understand you to say is that, one can start running a function
> in PG, change all security context from under it, and it will still work
> under the original login context, despite the changes.
>

https://www.postgresql.org/docs/12/tutorial-transactions.html

David J.


Re: Persistent Connections

2020-06-23 Thread David G. Johnston
Why is there now a second thread for this topic?

On Tue, Jun 23, 2020 at 3:21 PM Bee.Lists  wrote:

> >
> > On Jun 23, 2020, at 4:51 PM, Michael Lewis  wrote:
> >
> > Do you see anything in pg_stat_activity that stays idle for a while and
> then *does* disappear on its own? Perhaps some types of connections are
> doing client side/application stuff before telling the DB to close the
> connection.
>
> I’m finding those queries sticking around.  These queries are very
> simple.  Last login type of stuff.
>
> > Idle means the query finished and that was the last query run. It isn't
> active or waiting on another process, that connection is open by idle.
>
> OK.  The page that I load up is a dashboard and has a handful of queries.
> From the looks of it, it looks like they’re still working, but idle.  But
> you’re saying they’re just open connections?  Why would they remain open?
>

"they (queries) are still working, but idle" - your terminology is
problematic and it is probably affecting your understanding.  As I said on
the other thread you should probably post the actual output you are
commenting on if you want to actually move this discussion forward.

> It sounds like a good time to set one up.
>
> OK, some further questions:
>
> Who do the connections belong to?  Not the client, not the server
> (apparently).  Is there one that’s independent and behaves as the front end
> of connection management?
>

As I asked on the other thread: a connection is a link between two
parties.  What does it mean to "belong to" in this context?  You have
mis-interpreted Tom's answer from the other thread.

> I would increase the limit directly, or with a pooler and research which
> connections are behaving, and which are taking too long to close or not
> closing at all. You could set up a process to snapshot pg_stat_activity
> every minute or 5 and trace which pids are terminating properly, and/or
> make logging very verbose.
>
> How do I go about researching connection behaviour?  I guess a pooler
> should be investigated first.


Until you get a better grasp of the basics you should not be introducing
any more moving parts.  If anything you need to remove some in order to
figure out which one of the existing parts is causing your problem.


> Once I find culprits, what options do I have?  Not sure why new
> connections are made when these idle past connections seem valid and
> usable.
>

Not sure how you expect an answer to "how do I fix the problem" without an
understanding of what the problem is.

> There is agreement that ORMs shouldn’t be managing a connection pool, and
> this doesn’t achieve to do that.  I’ll be looking into a pooler.  This
> client (the gem is Sequel, btw) uses what it assumes are valid connections,
> but that’s where it fails as the database apparently disconnects
> prematurely.  The gem has a procedure to check how long since the last pool
> was investigated for legit connections, but I think that’s irrelevant.
> It’s finding what it’s told are legit connections, which are not.  It’s
> been lied to.
>

That the pooler in your application is being lied to is probably the most
likely answer, as Tom said in the other thread.  But the rest of what you
are saying here just sounds like nonsense.  "I'll be looking into a pooler.
... The gem has a procedure to check how long since the last pool was
investigated for legit connections ...".  I'd like to point out that you
seem to be saying that you simultaneously have a connection pool and don't
have a connection pool involved here...

As I suggested on the other thread, and repeated above, you need to build
up a script that can reproduce the problem.  Something that can open a
connection to the server and then wait for a period of time before
executing a query against it to see if that session gets dropped on the
client side while still remaining visible on the server.

David J.


Re: EXTERNAL: Re: Netapp SnapCenter

2020-06-25 Thread David G. Johnston
On Thu, Jun 25, 2020 at 8:24 AM Paul Förster 
wrote:

> Archived WAL is another thing, but PGDATA and pg_wal should IMHO always be
> located on the same volume, along with tablespaces, if any.
>

My understanding is that having such a setup (single volume) eases
administration at the cost of performance.  It is perfectly acceptable to
favor the performance benefits, you just need to ensure that your
administration of the server is robust enough to handle it properly.  It
sounds like this product, though, doesn't provide the correct depth of
administration while recommending the performance weighted setup.

David J.


Re: libpq pipelineing

2020-06-27 Thread David G. Johnston
On Friday, June 26, 2020, Samuel Williams 
wrote:

> > What about, as it says, sending multiple statements in a single
> sendQuery and then polling for multiple results?
>
> I tried this, and even in single row streaming mode, I found that
> there are cases where the results would not be streamed until all the
> queries were sent.
>
> From the users point of view, they may generate a loop sending
> multiple queries and don't care about the result, so a pipeline/batch
> processing is ideal to avoid RTT per loop iteration, if database
> access is slow, this can be a significant source of latency
>

I don’t have any insight into the bigger picture, but I’d concur that no
other option is documented, so what you desire is not possible.

David J.


Re: libpq pipelineing

2020-06-26 Thread David G. Johnston
On Friday, June 26, 2020, Samuel Williams 
wrote:

> Hello,
>
> Using the asynchronous interface of libpq, is it possible to pipeline
> multiple queries?
>
> i.e.
>
> PQsendQuery(query1)
> PQsendQuery(query2)
>
> followed by
>
> query1_results = PQgetResult(...)
> query2_results = PQgetResult(...)
>
> I tried it but got "another command is already in progress" error.
>

The documentation seems to leave zero ambiguity:

 After successfully calling PQsendQuery, call PQgetResult one or more times
to obtain the results. PQsendQuery cannot be called again (on the same
connection) until PQgetResult has returned a null pointer, indicating that
the command is done.

David J.


Re: libpq pipelineing

2020-06-26 Thread David G. Johnston
On Friday, June 26, 2020, Samuel Williams 
wrote:

> Thanks David,
>
> You are correct.
>
> I was giving an example of what I was hoping to achieve, not what I
> expected to work with the current interface.
>

What about, as it says, sending multiple statements in a single sendQuery
and then polling for multiple results?

David J.


Re: Unable to execute pg_dump

2020-06-14 Thread David G. Johnston
On Sunday, June 14, 2020, Joseph Maruca  wrote:
>
> '''sudo -u postgres -H --psql -px -d db_name'''
>
> If I enter the following syntax from the RHEL command line:
>
> '''sudo su postgres'''
>
> I end up in the bash-4.1 shell. When executing the following command from
> within the shell: bash-4.1$ pg_dump db_name > /tmp/my_database.sql I am
> presented with the following error:
>
> pg_dump: [archiver (db)] connection to database "db_name" failed: could
> not connect to server: No such file or directory Is the server running
> locally and accepting connections on Unix domain socket
> "/var/run/postgresql/.s.PGSQL.5432"?
>
Not a RHEL user but...

For psql you specify the port but not for pg_dump.
For psql you seem to be switching to the postgres user’s environment but
don’t do that for pg_dump.

In short it seems like you need to trace down your O/S environment
differences when each command is run.

David J.


Re: create batch script to import into postgres tables

2020-06-19 Thread David G. Johnston
On Friday, June 19, 2020, pepevo  wrote:

> But everything can run by script on the server, right?
>

Separation of concerns.  The server with the database cluster should
probably not be running application code.  Application code can be run on
another machine; "admin" machine is one label.  Though for development it
shouldn’t matter so long as the application is configurable.  Configure it
for local during development and when in production it pulls production
configuration.

David J.


Re: create batch script to import into postgres tables

2020-06-19 Thread David G. Johnston
On Friday, June 19, 2020, pepevo  wrote:

> We can't just install any softwares without Goverment's approval.  Also,
> they might ask Oracle/mysql/sql can run batch script, why not Postgres?  I
> wonder myself and just realize today from this email.
>

PostgreSQL isn’t the issue here; you are.  To the extent that others
attempting to help you over email have been insufficient, that is a failing
driven mainly by email being an inefficient medium for this kind of
learning.  What you want to do is possible but it requires understanding
multiple applications.  You are better off getting closer to your goal by
reading books and articles about those applications and then asking better
questions.  Or at minimum being more detailed in stating your goal and
requirements.

David J.


Re: Need help with PITR for PostgreSQL 9.4.5

2020-06-24 Thread David G. Johnston
The version you are running is neither up-to-date for its major version
(9.4) nor is the major version being supported.

https://www.postgresql.org/support/versioning/

Though a functioning backup is good to have before upgrading, especially
across major versions.

On Wednesday, June 24, 2020, Sri Linux  wrote:
>
> I am trying to do PINT backup and recovery for the standalone
> database which is not in a cluster.
>

That is not possible.  It's unclear what exactly you meant/did though.
Consider sharing actual commands/scripts.

> < 2020-06-22 16:34:08.487 CDT >LOG:  WAL file is from different database
> system: WAL file database system identifier is 6840038030519879233,
> pg_control database system identifier is 6841285371464745049.
>

You cannot PITR if the WAL doesn’t match the base backup for the cluster
you are trying to restore.

https://www.postgresql.org/docs/9.4/continuous-archiving.html

Or, probably better, consider using a third-party system.

David J.


Re: gdal, proj and other postgis dependencies missing in postgres repos

2020-06-07 Thread David G. Johnston
On Sunday, June 7, 2020, Rene Romero Benavides 
wrote:

> On Sun, Jun 7, 2020 at 5:37 PM Rene Romero Benavides <
> rene.romer...@gmail.com> wrote:
>
>> Hi everybody, do you know what happened to gdal and other postgis
>> dependencies like proj in the official postgres repos?
>> they appear to be missing in these repos
>>
>> https://yum.postgresql.org/12/redhat/rhel-7-x86_64/
>> https://yum.postgresql.org/11/redhat/rhel-7-x86_64/
>> https://yum.postgresql.org/10/redhat/rhel-7-x86_64/
>>
>> they used to be there, right? Thank you.
>>
>
 https://yum.postgresql.org/news-newreporpmsreleased.php

David J.


Re: Can we get SQL Server-like cross database queries

2020-06-04 Thread David G. Johnston
On Wednesday, June 3, 2020, Laurenz Albe  wrote:

> On Thu, 2020-06-04 at 16:41 +1200, Thomas Munro wrote:
> > There's no doubt it's useful, and it's also part of the SQL spec,
> > which says you can do catalog.schema.table.  I would guess that we
> > might get that as a byproduct of any project to make PostgreSQL
> > multithreaded.  That mountain moving operation will require us to get
> > rid of all the global state that currently ties a whole process to one
> > session and one database, and replace it with heap objects with names
> > like Session and Database that can be passed around between worker
> > threads.
>
> I am -1 on cross-database queries.
>
> I think it is a desirable feature to have databases isolated from
> each other, so you don't have to worry about a permission you forgot
> that allows somebody to access a different database.
>
> I think this is particularly relevant since all databases share the
> same users.
>
> I understand that sometimes the opposite would be desirable, but
> foreign data wrappers have alleviated that pain.
>

I agree with the conclusion but not so much with the premise.  Even with
global users you still need to grant permissions to individual databases
and it’s debatable whether it’s “more safe” to prevent a user from directly
accessing a database in the “catalog.schema” reference manner if they can
do so with a direct login.

I agree with the general premise that modularity and isolation are
generally positive qualities, especially as scale grows, and that expending
considerable resources strictly for the goal of adding this capability to
the system is not a direction that I would be in favor of.  Now, if the
prereqs for this feature also have other concrete benefits that are worth
working toward, and in the end the sum of those makes cross-database
queries a relatively simple matter, I would entertain putting in the last
10% of effort to become standard compliant.

David J.


Re: checking existence of a table before updating its SERIAL

2020-06-08 Thread David G. Johnston
On Monday, June 8, 2020, Matthias Apitz  wrote:

>
> Can some kind soul help me with doing a test for the existence of the
> table to avoid the error message about non existing relation?
>


 https://www.postgresql.org/docs/12/catalogs-overview.html
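
For example, a simple existence test (table name illustrative; to_regclass
is available since 9.4 and returns NULL instead of raising an error when
the relation does not exist):

SELECT to_regclass('public.mytable') IS NOT NULL AS table_exists;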

David J.


Re: Is it possible to use keywords (date units) in a function definition?

2020-06-08 Thread David G. Johnston
On Mon, Jun 8, 2020 at 2:57 PM Alistair Johnson  wrote:

>
> RETURN EXTRACT(datepart FROM end - start);
> Any ideas? Is this even possible?
>

Use the "date_part" function.

David J.


Re: A parsing question

2020-06-03 Thread David G. Johnston
On Wed, Jun 3, 2020 at 3:41 PM Michael Nolan  wrote:

> Recently I was typing in a query in PG 10.4.
>
> What I MEANT to type was:   Where xyz >= 2400
>
> What I actually typed was:  Where xyz >- 2400
>
> The latter was interpreted as 'where xyz > -2400', but I'm wondering if it
> shouldn't have thrown an error on an unrecognized operator '>-'
>

From the syntax section of the documentation:

A multiple-character operator name cannot end in + or -, unless the name
also contains at least one of these characters:

~ ! @ # % ^ & | ` ?

For example, @- is an allowed operator name, but *- is not. This
restriction allows PostgreSQL to parse SQL-compliant queries without
requiring spaces between tokens.
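
A quick way to see both halves of that rule in action:

SELECT 5 >- 3;   -- parsed as 5 > -3, returns true
SELECT 5 @- 3;   -- ERROR:  operator does not exist: integer @- integer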

David J.


Re: suggestion: psql configs in .config

2020-06-11 Thread David G. Johnston
On Thursday, June 11, 2020, Caleb Cushing  wrote:

> would it be possible to allow psql config files to reside in
> ~/.config/psql to help unclutter ~ obviously this should be some kind of
> cascading lookup
>
> first look for... e.g
> .config/psql/psqlrc
> .psqlrC
>

libpq consults environment variables to allow for user customization.

David J.


Re: Returning SELECTed rows immediately instead of all at the end?

2020-06-12 Thread David G. Johnston
On Friday, June 12, 2020, Ron  wrote:

>
> I'm running amcheck on a set of indices (test machine, not prod) and want
> to track the progress.  Is there a SELECT clause that makes rows display as
> they are created,


No


> or do I have to explicitly call bt_index_check() from a shell script or
> SQL function in order to see the output as each index is checked?
>

You could wrap the function call in a custom plpgsql function (or just do a
plpgsql loop) and side-channel output via NOTICE but I'd probably just do a
shell script wrapper absent any other constraint.
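
A rough sketch of the notice-per-index approach (assumes the amcheck
extension is installed; restricted to btree indexes since that is all
bt_index_check handles):

DO $$
DECLARE
  idx regclass;
BEGIN
  FOR idx IN
    SELECT c.oid::regclass
    FROM pg_class c
    JOIN pg_am a ON a.oid = c.relam
    WHERE c.relkind = 'i' AND a.amname = 'btree'
  LOOP
    PERFORM bt_index_check(idx);
    RAISE NOTICE 'checked %', idx;  -- emitted as each index finishes
  END LOOP;
END $$;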

David J.


Re: Is it possible to use keywords (date units) in a function definition?

2020-06-08 Thread David G. Johnston
On Mon, Jun 8, 2020 at 5:06 PM Martin Gainty  wrote:

> CREATE OR REPLACE FUNCTION DATETIME_DIFF(end TIMESTAMP(3), start
> TIMESTAMP(3), datepart UNIT) RETURNS DOUBLE PRECISION AS $$
>

Duplicate email from account (same sender) - already answered on the
original/correct thread.

David J.


Re: Is it possible to use keywords (date units) in a function definition?

2020-06-08 Thread David G. Johnston
On Mon, Jun 8, 2020 at 2:57 PM Alistair Johnson  wrote:

> Hello,
>
> I recently tried to write a wrapper function to calculate the difference
> between two dates, mainly as a convenience. I'd essentially be emulating
> EXTRACT( FROM date1 - date2), in various ways. I got a bit stuck
> on allowing specification of the : is this possible in function
> definitions? I'd like to be able to write something along the lines of:
>
> CREATE OR REPLACE FUNCTION DATETIME_DIFF(end TIMESTAMP(3), start
> TIMESTAMP(3), datepart UNIT) RETURNS DOUBLE PRECISION AS $$
> BEGIN
> RETURN EXTRACT(datepart FROM end - start);
> END; $$
> LANGUAGE PLPGSQL;
>
> One option would be to treat datepart as a string, but this doesn't work
> for my use case. (Background: I'm trying to refactor a bunch of SQL scripts
> to work on Google BigQuery and PostgreSQL by writing PostgreSQL functions
> to emulate BigQuery functions. Unfortunately BigQuery does not recognize
> the third argument if it is a string (i.e. 'HOUR' does not work but HOUR
> does)).
>
> Any ideas? Is this even possible?
>
>
I think you need to be more specific as to what "this" means.

Looking again after Adrian's comment, are you trying to write, in the
script file:

datetime_diff('start time as string'::timestamp, 'end time as
string'::timestamp, HOUR)

and get PostgreSQL to recognize the value HOUR as a custom type
value without single quotes surrounding it?

If that is the question the answer is no.  The only type literals that can
be written without single quotes are numbers.

The parsing of SQL can handle some standard mandated non-quoted constants
but they are basically keywords, not values.

David J.


Re: pg_service.conf and client support

2020-06-13 Thread David G. Johnston
On Saturday, June 13, 2020, Niels Jespersen  wrote:

> Can anyone shed som light on the ubiquitousness of support for
> pg_service.conf?
>
AFAIK same non-support for JDBC and Node.js

> Are there any other mechanisms with broader support, that can be used
> instead of pg_service.conf (if support is scarce beyond what builds on
> libpq)?
>
Not that I am aware, though programmatically accessed libraries are
somewhat able to be adapted using custom wrappers so that the pg_service
file contents can be leveraged.

David J.


Re: CASCADE/fkey order

2020-07-22 Thread David G. Johnston
On Wed, Jul 22, 2020 at 9:03 AM Samuel Nelson 
wrote:

> seems to fix it to work as we were expecting.  Is that particularly
> costly?  Should I only set the constraint to be deferred when we really
> need it?  Would it be more efficient to perform the deletes explicitly
> within a transaction rather than relying on the cascades and deferring that
> one constraint?
>

I don't know.  I tend to go with only deferring the check if the specific
transaction requires it.  If there are no issues I would presume that
checking at the end would be more efficient.  But if there are problems you
could end up performing unnecessary work.  Memory consumption probably
increases as well since constraint related information cannot be discarded
as each command completes but must be kept around for the eventual
validation.

David J.


Re: psql \r changed behavior in pg10

2020-07-22 Thread David G. Johnston
On Wednesday, July 22, 2020, Emanuel Araújo  wrote:

>
> \r
> \e
> -> Open temp file with the same last command "select 1;"
> is it right?
>
>
Documentation since v10:

Or, if the current query buffer is empty, the most recently executed query
is copied to a temporary file and edited in the same fashion.

David J.


Re: How to create function returning numeric from string containing percent character

2020-07-22 Thread David G. Johnston
On Wed, Jul 22, 2020 at 3:50 AM Andrus  wrote:

> val function should return numeric value from string up to first non-digit
> character, considering first decimal point also:
>
> val('1,2TEST')  should return 1.2
> val('1,2,3')  should return 1.2
> val('-1,2,3')  should return -1.2
>
> SELECT coalesce(nullif('0'||substring(Translate($1,',','.'),
> '^-?[0-9]+\.?[0-9]*$'),''),'0')::numeric;
> select val('1,2%')
> How to force it to return 1.2 ?
>
> It should work starting from Postgres 9.0


Removing the $ from your regexp pattern should work for the 4 provided
examples.

You cannot remove stuff from the end of a string if you require that the
end of said string match what you want to return.
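
A sketch of the adjusted function - note it also drops the leading '0'
concatenation so negative values still cast cleanly, which is a change from
your original, so test it against your data:

CREATE OR REPLACE FUNCTION val(text) RETURNS numeric AS $$
  SELECT coalesce(
           nullif(substring(translate($1, ',', '.'), '^-?[0-9]+\.?[0-9]*'), ''),
           '0')::numeric;
$$ LANGUAGE sql IMMUTABLE;

SELECT val('1,2%');   -- 1.2
SELECT val('-1,2,3'); -- -1.2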

David J.


Re: psql \r changed behavior in pg10

2020-07-22 Thread David G. Johnston
On Wednesday, July 22, 2020, David G. Johnston 
wrote:

> On Wednesday, July 22, 2020, Emanuel Araújo  wrote:
>
>>
>> \r
>> \e
>> -> Open temp file with the same last command "select 1;"
>> is it right?
>>
>>
> Documentation since v10:
>
> Or, if the current query buffer is empty, the most recently executed query
> is copied to a temporary file and edited in the same fashion.
>

I believe \e is now working as intended but assuming it worked differently
in 9.6 the behavior change did not get noticed and so no release note entry
was added for it.  Adding the new \if meta commands in v10 resulted in
reworking of the code probably causing this to change.  We fixed the docs
to match the expected behavior which was seen in v10 when the doc patch was
written.

https://github.com/postgres/postgres/commit/e984ef5861df4bc9733b36271d05763e82de7c04

David J.


Re: About compress in pg_dump

2020-07-17 Thread David G. Johnston
On Fri, Jul 17, 2020 at 7:49 AM Edmundo Robles  wrote:

> To backup  a database  I do:
>  nice -n +19  pg_dump -Fc  database | nice -n +19 gzip --rsyncable   -nc
> >  database.dump
>
> If -Fc  option  is compressed  by default  I dont need gzip the backup,
> but I need pass --rsyncable  and -n options.
>
> How can  I pass  gzip options  to compress in pg_dump?
>

pg_dump isn't using the gzip program, it's just performing compression per
the gzip compression specification, and doesn't provide those two features
to control its processing (or any features beyond what's documented on the
pg_dump reference page).
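
If the --rsyncable and -n behavior is what matters, one workaround (a
sketch) is to disable pg_dump's internal compression and keep piping
through the real gzip:

nice -n +19 pg_dump -Fc -Z 0 database | nice -n +19 gzip --rsyncable -nc > database.dump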

David J.


Re: PostgreSQL make too long to start.

2020-07-17 Thread David G. Johnston
On Fri, Jul 17, 2020 at 9:16 AM FOUTE K. Jaurès 
wrote:

> It is make sense that PostgreSQL make too long to start, About 20
> minutes.  I'm using PostgreSQL 12 intalling on Ubuntu Server 18.04 and my
> database is about 25 GO  of data.
>

Every time?  How are you shutting down the server?

Additionally, you will want to examine, and probably relay, the contents of
the log file during these shutdown/startup periods.

David J.


Re: Is upper_inc ever true for dateranges?

2020-07-28 Thread David G. Johnston
On Tue, Jul 28, 2020 at 2:19 PM Ken Tanzer  wrote:

> So here's my question.  Will the upper_inc function always return false
> for a non-null daterange?  And if so, what's the point of the function?
> And/or is it different for other kinds of ranges?
>

Ranges over discrete types are always canonicalized while ranges over
non-discrete types (e.g., numeric) cannot be.
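
You can see the canonicalization directly:

SELECT daterange('2020-01-01', '2020-01-31', '[]');
-- [2020-01-01,2020-02-01)  (the inclusive upper bound is rewritten as exclusive)

SELECT upper_inc(daterange('2020-01-01', '2020-01-31', '[]'));  -- false
SELECT upper_inc(numrange(1, 2, '[]'));                         -- true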

David J.


Re: Re: PG 9.5.5 cores on AIX 7.1

2020-07-19 Thread David G. Johnston
On Sun, Jul 19, 2020 at 11:04 AM Abraham, Danny 
wrote:

>
> Customer is using 10.4 , not 9.5.5.
>
> Does the same argument apply for upgrading to 10.12 ?
>

Running the current minor release of PostgreSQL is a pre-req when reporting
problems; more so when it's largely impractical for someone else to
duplicate the problem.

David J.


Re: Problem with pg_service.conf

2020-07-23 Thread David G. Johnston
On Thu, Jul 23, 2020 at 6:12 AM Michał Lis  wrote:

> Hello,
>
> The server is located in the lan and runs on Windows 7x64 Ultimate.
> On this server I created pg_service.conf file and set the environment
> variable of PGSERVICEFILE.
>

The server software (postgres) doesn't use PGSERVICEFILE, only client
software does (psql, pg_dump, etc.).


> Coping the pg_service.conf file from server to the client and setting 
> environment
> variable PGSERVICEFILE to this file has no sense, because of possibility
> of storing login and password in this file.
>

Replace "copying" (Google incorrectly wants to replace copying with
coping...) with "moving"; or just "create a pg_service.conf file on the
local machine with identical contents".

> I want to connect to the server by service name, because I don't want to
> store any login information on the client side.
>

What you are describing is simply not possible.  You can avoid storing
credentials on the client if you wish but you need a person to enter them
manually each time.  If you want any form of self-authentication by the
client then the credentials the client uses must be made available to it
somehow.  Searching the internet will surely turn up many such options for
this.  But it is nonsense to say that the client need only supply an
unencrypted shared secret (i.e., the service name) to the server and be
granted access.  That is no better than storing a username and password on
the local machine.

> At this moment I have the information that the copy of pg_service.conf file
> is necessary on the client side to establish connection by service name.
>

PostgreSQL has a client-server architecture.  pg_service.conf is used only
by the client.  In your local machine example the client and the server
software are on the same machine and so the fact that the pg_service.conf
file was only being read by the client was not apparent.  Once you move the
client to a machine that lacks a pg_service.conf file the client rightfully
complains that it cannot find a service entry that exists on the server.
If you want both the local machine and remote machine client software to be
aware of the service name then both machines need their own pg_service.conf
file with that service definition.  The client is not continually
downloading the pg_service.conf file from the server (if you wanted a
shared pg_service.conf file you could do so but you would probably want a
separate configuration machine to provide it, not the PostgreSQL server).
And the server doesn't recognize service names supplied by the client - it
only recognizes credentials and a database name (and other configuration
settings too).
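
For concreteness, a sketch of what each client machine's pg_service.conf
might contain (all values illustrative):

[myservice]
host=dbserver.example.com
port=5432
dbname=mydb
user=appuser

The client then connects with something like: psql "service=myservice"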

David J.


Re: CASCADE/fkey order

2020-07-22 Thread David G. Johnston
On Wed, Jul 22, 2020 at 8:24 AM Samuel Nelson 
wrote:

> Is there a way to force the delete to cascade to tables in a specific
> order?
>

Not really, but you can defer constraint checking.

https://www.postgresql.org/docs/12/sql-set-constraints.html
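
A sketch of how that looks (table and constraint names illustrative; the
constraint must be declared deferrable first):

ALTER TABLE child ALTER CONSTRAINT child_parent_fkey DEFERRABLE;

BEGIN;
SET CONSTRAINTS child_parent_fkey DEFERRED;
DELETE FROM parent WHERE id = 42;  -- cascades fire; the FK is checked at commit
COMMIT;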

David J.


Re: Fwd: How to connect PostgreSQL (9.6.17) database from Macbook Pro Terminal using JumpCloud password?

2020-07-16 Thread David G. Johnston
On Thu, Jul 16, 2020 at 1:24 PM Adrian Klaver 
wrote:

> On 7/16/20 1:17 PM, Devraj B wrote:
>
> Please reply to list also.
> Ccing list.
> > Thanks Adrian,
> >
> > I had granted LOGIN to  PostgreSQL user  firstname.lastname but do Not
> > want to provide a database password,
> > rather I wanna access the database using my JumpCloud password directly
> > from my Macbook Pro using LDAP authentication or any other
> authentication.
> >
> > Please suggest me following:
> >
> >> But I want to setup JumpCloud or LDAP or any other authentication so
> >> that I can connect PostgreSQL user  "firstname.lastname" directly from
> >> my Macbook Pro Terminal using my JumpCloud Password. Like:-
>
> That's outside my knowledge, so someone else is going to have to jump in
> on this.
>

It is quite possible this can be done.  I mean, there are 11 different
authentication methods mentioned in the documentation.  But the thing is,
actually setting up the link between the external authentication method and
PostgreSQL takes knowledge and skills outside the normal DBA's skillset and
outside the responsibility of PostgreSQL.  It may be that there is
information on the Internet, or someone chimes in (though 2 business days
without a response means I wouldn't hold my breath), but the documentation
is only going to be of limited use - though it is important nonetheless.

My suggestion to the OP is to just get password or other PostgreSQL-only
authentication working, possibly with the help of PostgreSQL people, and
then decide whether the convenience of what is basically single-sign-on is
worth the learning curve.

David J.


Re: Is postgres able to share sorts required by common partition window functions?

2020-07-06 Thread David G. Johnston
On Monday, July 6, 2020, Michael Lewis  wrote:

> Did you say you have an index on c1?
> [...]
> I don't know the data, but I assume there may be many rows with the same
> c1 value, so then you would likely benefit from getting that distinct set
> first like below as your FROM table.
>

Re-reading the original email I see both the answer to your question and
the data being queried.

David J.


Re: Is postgres able to share sorts required by common partition window functions?

2020-07-06 Thread David G. Johnston
On Monday, July 6, 2020, Sebastien Arod  wrote:

> I would have expected postgresql to "share" a preliminary sort on c1 that
> would then be useful to reduce the work on all window functions but it
> doesn't.
>

The plan shown does share - the output of one sort goes into another.
Subsequent sorts still have to happen but they should be faster as the
first field is already grouped.  Doesn’t change the plan though.


> I even created an index on c1 hoping that postgresql would be able to use
> it in order to minimize the cost of the sorts but I couldn't make it use it.
>

Use it how?  You are still evaluating 250k groups so doing anything
piece-wise seems like an expected loss compared to sequential scan.

David J.


Re: Bytea Example

2020-08-16 Thread David G. Johnston
On Sun, Aug 16, 2020 at 10:11 AM Naveen Kumar  wrote:

> *"PostgreSQL 12.3, compiled by Visual C++ build 1914, 64-bit"*
>
> I am just trying to practice LOB objects, like byteA data type, in
> PostgreSQL. Unfortunately, I didn't find good links regarding this so I
> raised the issue. Nothing else I am doing.
>
You should supply code that demonstrates what you have tried so far and the
errors you are getting (or commented out pseudocode in the places where you
are not sure what to write).

David J.


Re: CASE WHEN idiomatic for functions with side-effect?

2021-01-12 Thread David G. Johnston
On Tue, Jan 12, 2021 at 1:14 AM Joel Jacobson  wrote:

> Is it idiomatic and safe to use
>
> SELECT
>   CASE boolean_expression WHEN TRUE THEN function_with_side_effects() END
>

As long as function_with_side_effects() is defined volatile it is forced to
be executed at runtime, once per row.  That situation is always protected
by the case expression.
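
A minimal illustration (f() standing in for any volatile function; names
illustrative):

SELECT CASE WHEN flag THEN f(x) END FROM t;  -- f() runs only where flag is true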


> "related cases that don't obviously involve constants can occur in queries
> executed within functions, since the values of function arguments and local
> variables can be inserted into queries as constants for planning purposes.
> Within PL/pgSQL functions, for example, using an IF-THEN-ELSE statement to
> protect a risky computation is much safer than just nesting it in
> a CASE expression."
>
> The affected real code:
> https://github.com/truthly/uniphant/blob/rls/FUNCTIONS/api/verify_assertion.sql
>

The relevant function takes in a column argument - it is thus impossible
for the planner to evaluate the expression.  And, as above, the planner
respects the "volatile" attribute of functions.

David J.

