Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread David Pacheco
On Mon, Nov 6, 2017 at 12:35 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> David Pacheco <d...@joyent.com> writes:
> > I ran into what appears to be a deadlock in the logging subsystem.  It
> > looks like what happened was that the syslogger process exited because it
> > ran out of memory.  But before the postmaster got a chance to handle the
> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> > That also failed, and the postmaster went to log a message about it, but
> > it's blocked on the pipe that's normally connected to the syslogger,
> > presumably because the pipe is full because the syslogger is gone and
> > hasn't read from it.
>
> Ugh.
>


Should I file a bug on this issue?

Thanks,
Dave


Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread David G. Johnston
On Wednesday, November 8, 2017, Igal @ Lucee.org <i...@lucee.org> wrote:
>
> Kettle throws an error though:  column "discount" is of type money but
> expression is of type double precision.
>
> The value in the offending insert is:  0.0
>
> Why does Postgres decide that 0.0 is "double precision" (which is a weird
> name in my opinion -- why can't it just be double) and not money?
>
The lack of quotes surrounding the value is significant.  Money input
requires a string literal.  Only (more or less) integer and double literal
values can be written without the single quotes.
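
A minimal illustration of the distinction (the failing case mirrors the error
Kettle reported; exact money formatting depends on lc_monetary):

-- quoted input is handed directly to the money type's input function
SELECT '0.0'::money;           -- ok, e.g. $0.00
-- an unquoted 0.0 is a numeric literal, not money input
SELECT pg_typeof(0.0);         -- numeric
-- a parameter bound as double precision (what Kettle sends) has no
-- implicit conversion to money, hence the reported error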

David J.


Re: [GENERAL] Naming conventions for column names

2017-11-07 Thread David G. Johnston
On Mon, Nov 6, 2017 at 10:30 PM, Sachin Kotwal <kotsac...@gmail.com> wrote:

>
> Please committers give their final view on this.
>
>
They, and others, have - it's a "don't want".

IOW, don't expend any effort since that effort will have been wasted - not
that it would take zero effort to accomplish.

If there is an addition to the system catalogs overview page of the docs
that describes the naming convention I think that would be swell.

David J.


Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-06 Thread David Pacheco
On Mon, Nov 6, 2017 at 12:35 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> David Pacheco <d...@joyent.com> writes:
> > ... that process appears to have exited due to a fatal error
> > (out of memory).  (I know it exited because the process still exists in
> the
> > kernel -- it hasn't been reaped yet -- and I think it ran out of memory
> > based on a log message I found from around the time when the process
> > exited.)
>
> Could we see the exact log message(s) involved?  It's pretty hard to
> believe that the logger would have consumed much memory.



Thanks for the quick reply!

Based on kernel state about the dead but unreaped syslogger process, I
believe the process exited at 2017-10-27T23:46:21.258Z.  Here are all of
the entries in the PostgreSQL log from 23:19:12 until the top of the next
hour:
https://gist.githubusercontent.com/davepacheco/c5541bb464532075f2da761dd990a457/raw/2ba242055aca2fb374e9118045a830d08c590e0a/gistfile1.txt

There's no log entry at exactly 23:46:21 or even immediately before that,
but there are a lot of "out of memory" errors and a FATAL one at 23:47:28.
Unfortunately, we haven't configured logging to include the pid, so I can't
be sure which messages came from the syslogger.

There are also many log entries for some very long SQL queries.  I'm sure
that contributed to this problem by filling up the pipe.  I was able to
extract the contents of the pipe while the system was hung, and it was more
of these giant query strings.

I think it's likely that this database instance was running in a container
with way too small a memory cap for the number of processes configured.
(This was a zone (a lightweight container) allocated with 2GB of memory and
configured with 512MB of shared_buffers and up to 200 connections.)  I
expect that the system got to a persistent state of having basically no
memory available, at which point nearly any attempt to allocate memory
could fail.  The syslogger itself may not have been using much memory.

So I'm not so much worried about the memory usage itself, but it would be
nice if this condition were handled better.  Handling out-of-memory is
obviously hard, especially when it means being unable to fork, but even
crashing would have been better for our use-case.  And of course, there are
other reasons that the syslogger could exit prematurely besides being low
on memory, and those might be more recoverable.

Thanks,
Dave


Re: [GENERAL] idle in transaction, why

2017-11-06 Thread David G. Johnston
On Mon, Nov 6, 2017 at 12:32 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

> Using postgres 10-beta3 (hopefully 10.0 this week) on virtual CentOS7 and
> this JDBC driver postgresql:42.1.4
>
>
> The postgresql.conf file has
>
> #idle_in_transaction_session_timeout = 0# in milliseconds, 0 is
> disabled
>
>
​There are numerous places where default settings can be configured.

https://www.postgresql.org/docs/10/static/config-setting.html

You should probably login as your application user and do "show
idle_in_transaction_session_timeout" to see what a clean session has for a
value and then figure out from there where that value is coming from.
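
A concrete form of that check (a sketch; run it as the application user):

SHOW idle_in_transaction_session_timeout;
-- pg_settings.source reports whether the value came from the default,
-- the configuration file, a role or database setting, or the session
SELECT name, setting, source
FROM pg_settings
WHERE name = 'idle_in_transaction_session_timeout';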

David J.


[GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-06 Thread David Pacheco
Hello,

I ran into what appears to be a deadlock in the logging subsystem.  It
looks like what happened was that the syslogger process exited because it
ran out of memory.  But before the postmaster got a chance to handle the
SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
That also failed, and the postmaster went to log a message about it, but
it's blocked on the pipe that's normally connected to the syslogger,
presumably because the pipe is full because the syslogger is gone and
hasn't read from it.  The net result is that new connections to PostgreSQL
hang, and any query that causes PostgreSQL to log appears to hang, though
queries appear to work on existing connections as long as they do not cause
the backend to log anything.  I don't believe the system would ever get out
of this state.

I ran into this with version 9.2.4 on SmartOS (illumos), but it doesn't
look to me like any of the relevant code has changed in 9.6 or 10, nor is
it platform-specific.  Version 10.0 still:

- restarts the syslogger in response to a SIGCHLD signal (in "reaper")
- starts autovacuum workers in response to SIGUSR1 (in "sigusr1_handler")
- mask SIGCHLD at the beginning of the SIGUSR1 handler
- potentially logs from StartAutoVacWorker (and, of course, many other
places in the postmaster)

which I think means the problem is still possible.  Am I missing
something?  I couldn't find anything about this issue.  Is this just a
rarely seen bug?



Here's the raw data: my initial observation was that when I ran "psql" to
connect to the database, it would hang (for hours, it turned out).  My
postmaster process was blocked here, writing to its own stderr pipe:

$ mdb core.41349
Loading modules: [ libumem.so.1 libc.so.1 ld.so.1 ]
> $C
fd7fffdf9df0 libc.so.1`__write+0xa()
fd7fffdfb220 write_pipe_chunks+0x142()
fd7fffdfb280 send_message_to_server_log+0x55b()
fd7fffdfb2a0 EmitErrorReport+0xe6()
fd7fffdfb390 errfinish+0x1ac()
fd7fffdfb3b0 StartAutoVacWorker+0x5c()
fd7fffdfb3e0 StartAutovacuumWorker+0x6f()
fd7fffdfb410 sigusr1_handler+0x185()
fd7fffdfb420 libc.so.1`__sighndlr+6()
fd7fffdfb4b0 libc.so.1`call_user_handler+0x1db(10, 0, fd7fffdfb520)
fd7fffdfb500 libc.so.1`sigacthandler+0x116(10, 0, fd7fffdfb520)
fd7fffdfb970 libc.so.1`__pollsys+0xa()
fd7fffdfba90 libc.so.1`pselect+0x1cb(5, fd7fffdfbaf0, 0, 0, fd7fffdfbaa0, 0)
fd7fffdfbae0 libc.so.1`select+0x5a(5, fd7fffdfbaf0, 0, 0, fd7fffdfdaf0)
fd7fffdffb40 ServerLoop+0xb5()
fd7fffdffbe0 PostmasterMain+0xec2()
fd7fffdffc00 main+0x23a()
fd7fffdffc10 _start+0x6c()

From inspecting kernel state, I found that the only other file descriptor
on the system that corresponds to the other side of this pipe is in the
postmaster itself, which it keeps open in case it needs to restart the
syslogger.  Based on the code, I expected the syslogger subprocess to have
the pipe open, but that process appears to have exited due to a fatal error
(out of memory).  (I know it exited because the process still exists in the
kernel -- it hasn't been reaped yet -- and I think it ran out of memory
based on a log message I found from around the time when the process
exited.)  Then I expected the postmaster would have restarted it, but I
found that it currently has SIGCHLD both pending and masked:

$ pflags 41349
41349:  /opt/postgresql/9.2.4/bin/postgres -D /manatee/pg/data
data model = _LP64  flags = ORPHAN|MSACCT|MSFORK
sigpend = 0x00028000,0x,0x
 /1:flags = ASLEEP  write(0x2,0xfd7fffdf9e10,0x65)
sigmask = 0xfebff047,0xfff7,0x03ff

The signal mask may be system-specific, but I've decoded it and confirmed
that SIGCHLD is in "sigpend" as well as "sigmask".  And that makes sense
because the stack above indicates we're in "sigusr1_handler", which masks
SIGCHLD when it starts.

If it wasn't clear from the above, the logging configuration looks like
this:

log_destination = 'stderr'
logging_collector = on
log_directory = '/var/pg/'
log_filename = 'postgresql.log'

Thanks,
Dave


Re: [GENERAL] ERROR: invalid input syntax for integer: "INSERT"

2017-11-04 Thread David G. Johnston
On Saturday, November 4, 2017, Robert Lakes <robe...@propaas.com> wrote:
>
> Here's the error I am receiving - when I am attempting to insert a record
> into a table:
>
> ERROR:  invalid input syntax for integer: "INSERT"
> LINE 1: ...T INTO listings_cdc SELECT statement_timestamp(), 'INSERT', ...
>
>

The second column of listings_cdc has a type of integer.  The word INSERT
is not a valid value to store there.
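
One way to avoid this class of error is to name the target columns explicitly
so positions cannot silently shift (column names below are hypothetical,
since the table definition wasn't posted):

-- \d listings_cdc   -- in psql, shows the actual column order and types
INSERT INTO listings_cdc (change_time, operation)
SELECT statement_timestamp(), 'INSERT';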

David J.


Re: [GENERAL] explain analyze output: 0 rows, 1M loops

2017-11-01 Thread David G. Johnston
On Wed, Nov 1, 2017 at 12:25 PM, Justin Pryzby <pry...@telsasoft.com> wrote:

> On Wed, Nov 01, 2017 at 12:19:21PM -0700, David G. Johnston wrote:
> > On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe <scott.marl...@gmail.com>
> > wrote:
> >
> > > So some of my output from an explain analyze here has a line that says
> > > this:
> > >
> > > Index Scan using warranty_order_item_warranty_order_id_idx on
> > > warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
> > > time=0.110..0.111 rows=0 loops=1,010,844)
> > >
> >
> > Not my strong suit, but I'm pretty sure that reads: "The index was queried
> > 1M+ times and none of those inquiries resulted in a record being found".
> > IIUC I'd be wondering why some form of hash join wasn't used...
>
> Except that:
>
> https://www.postgresql.org/docs/current/static/using-explain.html
> "... the loops value reports the total number of executions of the node,
> and
> the actual time and ROWS VALUES SHOWN ARE AVERAGES PER-EXECUTION."
>

I seem to recall a somewhat recent commit that dealt with this.  The
problem is that with 1M loops a small number of rows returned will be
indistinguishable from zero when computed as an average within finite
precision.
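
As a worked example of that rounding: if the index scan returned, say, 100
rows in total across 1,010,844 loops, the per-loop average is
100 / 1,010,844 ≈ 0.0001, which EXPLAIN displays as rows=0 even though the
total (rows × loops) is not zero.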

Seeing entire plans, and not just a single line of one, tends to help too.

David J.


Re: [GENERAL] explain analyze output: 0 rows, 1M loopa

2017-11-01 Thread David G. Johnston
On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe <scott.marl...@gmail.com>
wrote:

> So some of my output from an explain analyze here has a line that says
> this:
>
> Index Scan using warranty_order_item_warranty_order_id_idx on
> warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
> time=0.110..0.111 rows=0 loops=1,010,844)
>

Not my strong suit, but I'm pretty sure that reads: "The index was queried
1M+ times and none of those inquiries resulted in a record being found".
IIUC I'd be wondering why some form of hash join wasn't used...

David J.​


Re: [GENERAL] pgaduit - is there a way to audit a role

2017-10-31 Thread David Steele
On 10/30/17 6:48 PM, rakeshkumar464 wrote:
> Is there a way to audit a group like as follows
> 
> alter role db_rw set pgaudit.log = 'read,write,function,ddl'  
> 
> and then any user part of db_rw role can be audited automatically.  It does
> not seem to work if I connect to the db as rakesh who is part of db_rw role.

This will not work because settings (GUCs) on a role are not inherited
by roles (or users) that are members of that role.  This is a
characteristic of the roles system and not inherent to pgAudit.

-- 
-David
da...@pgmasters.net




[GENERAL] From the "SQL is verbose" department, WINDOW RANGE specifications

2017-10-30 Thread David G. Johnston
The default range specification is:

RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW

It seems like a common second choice is to want:

RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING

Why did they have to make something so common take 49 characters that, for
infrequent users, is nearly impossible to remember?

The following would seem to suffice:

{ RANGE | ROWS } ALL

I'd be happy to use non-portable syntax here...
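
For reference, the verbose spelling in context; a sketch with hypothetical
table and column names:

SELECT grp,
       id,
       sum(amount) OVER (PARTITION BY grp
                         ORDER BY id
                         RANGE BETWEEN UNBOUNDED PRECEDING
                                   AND UNBOUNDED FOLLOWING) AS grp_total
FROM payments;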

As an aside, I'd vote to add the entire WINDOW syntax specification to the
Synopsis.  The main parameters section can remain as-is in order to aid in
reading comprehension - but having to search out the parameters area just
to remind oneself of the extremely verbose syntax is a bit annoying.  All
of the other sections are represented in both the main synopsis and the
parameters in this manner and I think WINDOW doesn't warrant an exception
(especially vis-a-via both the from_item and grouping_element
specifications).

David J.


[GENERAL] Make "(composite).function_name" syntax work without search_path changes?

2017-10-30 Thread David G. Johnston
CREATE SCHEMA altschema;
CREATE TYPE altschema.alttype AS ( altid text, altlabel text );

CREATE FUNCTION altschema.label(item altschema.alttype)
RETURNS text
LANGUAGE sql
AS $$
SELECT (item).altlabel;
$$;

WITH vals (v) AS (
SELECT ('1', 'One')::altschema.alttype
)
SELECT (v).label
FROM vals;

-- column "label" not found in data type altschema.alttype

SET search_path TO altschema;

WITH vals (v) AS (
SELECT ('1', 'One')::altschema.alttype
)
SELECT (v).label
FROM vals;

-- success

The system knows that the datatype being inspected is "altschema.alttype" -
would it be reasonable for the system to check for a function named "label"
in the same schema as the target type, "altschema", with the target
argument type and invoke it if present?

At this point I'm just writing: altschema.label(v) which is adequate but
not as clean.  I'm consciously trying to write queries that don't require
application schemas in the search path: including the joyous
operator(altschema.@@) syntax in some situations.  I suppose inference
could be considered in that situation as well.

David J.


Re: [GENERAL] Roles inherited from a role which is the owner of a database can drop it?

2017-10-30 Thread David G. Johnston
On Mon, Oct 30, 2017 at 12:25 PM, Ivan Voras <ivo...@gmail.com> wrote:

>
> 3. But they do log in with "developer" roles which are inherited from the
> owner role.
>
> ​[...]​

> I've tried it on a dummy database and it apparently works as described
> here. Is this by design?
>
>
Not quite following, but ownership is an inheritable permission; and even
if it were not, SET ROLE is all that would be required.  Any owner can drop
an object that it owns.


> What are the best practices for this sort of scenario where there is a
> single owner of all the schema (which is large), where developers need
> access to everything but cannot do something as drastic as dropping the dbs
> (and possibly tables)?
>

​Don't let developers into production databases...

Trusted people (and/or software) should be provided membership into
ownership groups.​  Developers should provide these people/programs with
vetted scripts to execute against production.  Developers can do whatever
they want on their local database instance with full schema-modifying
privileges.
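
A sketch of that arrangement (role and schema names are made up for
illustration):

CREATE ROLE app_owner NOLOGIN;    -- owns the schema and its objects
CREATE ROLE developer LOGIN;      -- day-to-day data access only
GRANT USAGE ON SCHEMA app TO developer;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO developer;
-- deliberately omitted: GRANT app_owner TO developer;
-- (membership would let developers SET ROLE to the owner and drop objects)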

"developers need access to everything" - there is a lot of nuance and
detail behind that fragment that is needed if one is going to develop a
data access and change management policy.

David J.


Re: [GENERAL] starting PG command line options vs postgresql.con

2017-10-30 Thread David G. Johnston
On Mon, Oct 30, 2017 at 6:48 AM, rakeshkumar464 <rakeshkumar...@outlook.com>
wrote:

> I would prefer using postgresql.conf.  what is the consensus in this forum
> regarding command line vs postgresql.conf.


​I suspect that most people administering a PostgreSQL database would
expect that the configuration file would be changed in lieu of passing
options via the command line.

Also if conflicting, which one
> takes priority.
>

​https://www.postgresql.org/docs/9.6/static/config-setting.html#AEN32498​

David J.


Re: [GENERAL] Old pg_clog files

2017-10-29 Thread David G. Johnston
On Sunday, October 29, 2017, Ron Johnson <ron.l.john...@cox.net> wrote:

> Hi,
>
> v8.4.17
>
> http://www.postgresql-archive.org/pg-clog-questions-td2080911.html
>
> According to this old thread,  doing a VACUUM on every table in the
> postgres, template1 and TAPd databases should remove old pg_clog files.
>
>
> However, while about 40 of them have been deleted, 183 still exist.  What
> did I do wrong?
>
>

Reading the old thread, it sounds like it might require multiple vacuums to
effect complete removal.
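
If it helps to narrow things down: pg_clog segments can only be truncated up
to the oldest frozen-XID horizon across all databases, which you can inspect
before vacuuming again (a sketch):

SELECT datname, datfrozenxid, age(datfrozenxid)
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
-- then VACUUM each database (or run: vacuumdb --all) and re-check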

David J.


Re: [GENERAL] How do I insert and update into a table of arrays of composite types via a select command?

2017-10-25 Thread David G. Johnston
On Wed, Oct 25, 2017 at 2:16 PM, Celia McInnis <celia.mcin...@gmail.com>
wrote:

> Got it, finally...
>
> insert into t_array select array[row((data_comp).*)::mytype[] from
> t_composite;
>
> I'm not sure why I need (data_comp).* rather than some of the other things
> that I tried and failed with...
>
>
The unusual set of parentheses is there so the parser interprets data_comp
as a column and not a table.  Usually one writes SELECT tbl.* FROM tbl, so
that is the assumed meaning of "name".*
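
A side-by-side sketch, reusing the names from the thread:

SELECT (data_comp).* FROM t_composite;  -- expands the composite column's fields
SELECT data_comp.*   FROM t_composite;  -- typically fails: data_comp is taken
                                        -- as a relation name (missing
                                        -- FROM-clause entry)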

David J.


Re: [GENERAL] multiple sql results to shell

2017-10-23 Thread David G. Johnston
On Mon, Oct 23, 2017 at 7:08 AM, Mark Lybarger <mlybar...@gmail.com> wrote:

> I have this bash/sql script which outputs some curl commands.  the
> backticks causes it to get interpreted by the shell.   This works fine if
> there is one result, but when there are many rows returned, it looks like
> one shell command.
>
> any help on getting multiple rows returned to be executed by the shell
> would be appreciated!
>
> thanks!
>
> `psql -P "tuples_only=on" -h ${DB_HOST} -d ${DB_NAME} -U ${DB_USER} -c
> "select 'curl -X POST http://${REGISTER_HOST}:8080/' || source_id ||
> '/${MT}/' || model || '/' || site || '/backoffice/register' from
> myschema.events where source_id = $SOURCE_ID and ineffective_date is null"`
>
>
​You will need to, instead, "SELECT source_id, model, site​ FROM ..." to
return the raw record data to bash and then use bash's loop facilities to
dynamically generate and execute the curl command.

A second option, that I've never tried, is returning the full string but
not within a backtick command, then using a bash loop to simply invoke the
string like a normal command.

David J.


Re: [GENERAL] How to find out extension directory

2017-10-20 Thread David G. Johnston
On Fri, Oct 20, 2017 at 1:12 PM, rakeshkumar464 <rakeshkumar...@outlook.com>
wrote:

> I am documenting on automating installation of pgaudit extension for
> containers.  On my laptop I see that the directory where the files
> pgaudit.control and pgaudit--1.2.sql needs to be present is
>
> /usr/share/postgresql/10/extension.
>
> How do I know beforehand where the dir path is ?
>

I think pg_config (
https://www.postgresql.org/docs/current/static/app-pgconfig.html ) is what
you are looking for.
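
For example, pg_config --sharedir prints the base path from the shell, and on
9.6/10 the same build-time values are exposed in SQL through the pg_config
view; extension control and script files live under its extension/
subdirectory:

SELECT name, setting
FROM pg_config
WHERE name = 'SHAREDIR';
-- e.g. /usr/share/postgresql/10  ->  files go in .../extension/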

David J.


Re: [GENERAL] Is it OK to create a directory in PGDATA dir

2017-10-19 Thread David G. Johnston
On Thu, Oct 19, 2017 at 5:32 PM, John R Pierce  wrote:

> On 10/19/2017 1:25 PM, Tomas Vondra wrote:
>
> Is it fine to create a subdir inside PGDATA and store our stuff
> there, or will PG freak out seeing a foreign object.
>
>
> PostgreSQL certainly does not check if there are unknown directories in
> the data directory, and it will not crash and burn. But it causes all
> sorts of problems, and it increases the probability of human error.
>
>
> most importantly, ONLY the postgres system process should have access to
> the pgdata directory, it should have permissions 700.   your apps should be
> running as a different user, and that user won't have access to said PGDATA.
>
Untrusted languages in the server, and superuser use of COPY, count as
"applications" that are going to be running under the postgres user as far
as the O/S is concerned.

​I do agree that external applications should communicate with the server
via a session and not by mutual knowledge of a filesystem location.

I too would recommend not conflating system-related data that belongs in
PGDATA and application-related data that should reside outside of that
location.

David J.


Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread David G. Johnston
On Thu, Oct 19, 2017 at 12:14 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> FROM  products,
>   (values ('red widget'::text)) consts(target)
> WHERE similarity(target, item_name) > 0.25
> ORDER BY target <<-> item_name
>
> PG 9.5 and up will flatten out cases like this to be exactly what you
> wrote out longhand.
>

​Does it matter if the values expression is embedded in a CTE?​  I find the
construct:

WITH constants AS (  VALUES (,,) )
SELECT * FROM tbl CROSS JOIN constants

nice as it keeps the constants at the top of the query string.  I presume
it is purely a style difference between using multiple FROM items and an
explicit CROSS JOIN.
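
Spelled out, the CTE form of Tom's example looks roughly like this (it
assumes the pg_trgm extension, as in the snippet above):

WITH consts(target) AS (
    VALUES ('red widget'::text)
)
SELECT p.*
FROM products p
CROSS JOIN consts
WHERE similarity(consts.target, p.item_name) > 0.25
ORDER BY consts.target <<-> p.item_name;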

David J.


Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread David G. Johnston
On Thu, Oct 19, 2017 at 8:21 AM, Igal @ Lucee.org <i...@lucee.org> wrote:

> Is it still true (the posts I see on this subject are quite old) that I
> can not do so in Postgres outside of a stored procedure/function?  And if
> so, what's the reason of not adding this feature?  Seems very useful to me.
>

​PREPARE sqlquery AS​ SELECT * FROM products WHERE col1 LIKE $1 OR col2
LIKE $1;
EXECUTE sqlquery('red widget');

Alban's DO blocks are problematic since they are incapable of generating a
result set.

As Scott said people needing this functionality in PostgreSQL are content
with using psql.

Adding lots of new custom syntax to pure server-side parsed SQL is a
non-trivial undertaking whose need is reduced by the alternatives so
described (functions, DO block, PREPARE, psql).
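
For completeness, the psql route mentioned above looks roughly like this,
reusing the column names from the example:

\set term 'red widget'
SELECT * FROM products WHERE col1 LIKE :'term' OR col2 LIKE :'term';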

David J.


[GENERAL] COPY log row count feauture request

2017-10-19 Thread david . turon
Hi everyone,

I have a question: is it possible to log the row count of a COPY command to
csv/syslog? I know that there are some limitations, like BEFORE INSERT
triggers. I don't know whether others would also be pleased by this feature.
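
Not the built-in feature being asked for, but a rough workaround sketch: run
COPY from a DO block and log the count yourself. This assumes GET DIAGNOSTICS
picks up COPY's processed-row count and that the server is allowed to read
the file; the path and table name are made up:

DO $do$
DECLARE
    n bigint;
BEGIN
    EXECUTE $c$ COPY my_table FROM '/tmp/data.csv' WITH (FORMAT csv) $c$;
    GET DIAGNOSTICS n = ROW_COUNT;
    RAISE LOG 'COPY loaded % rows into my_table', n;  -- lands in csvlog/syslog
END
$do$;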

Have a nice day.

David 


-- 
-
Ing. David TUROŇ
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava

tel.:+420 591 166 224
fax:+420 596 621 273
mobil:  +420 732 589 152
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: ser...@linuxbox.cz
-

Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread David G. Johnston
On Wed, Oct 18, 2017 at 2:34 PM, Don Seiler <d...@seiler.us> wrote:

> On Wed, Oct 18, 2017 at 4:17 PM, Vik Fearing <vik.fear...@2ndquadrant.com>
> wrote:
>
>> On 10/18/2017 08:17 PM, Don Seiler wrote:
>>
>> > I disagree with this. It isn't my company's business to test the
>> > Postgres software in development, as much as it would be needed and
>> > appreciated by the community.
>>
>> Yeah, let others do it for you!  Great attitude.
>>
>
> It's a realistic, practical attitude. I'm sorry that not every company
> wants to offer the resources to contribute back to the community as much as
> you want. But it's foolish to expect a company to perform their development
> lifecycle against betas and RCs. They have their own products to worry
> about. A gallant few may let their DBAs do some sandbox testing to
> contribute time back to the community, but you can't expect them to.
>

Both sides have made their point here - any more opinions or justifications
are going to just end up devolving into commentary that is unacceptable on
these lists.  The community benefits from people who do more than just run
production servers, while the business world has limited resources for
activities that are not directly business-related.  I feel that those
familiar with those dynamics are not surprised that someone would choose to
upgrade to 9.6.5 now, since the 10.x series is still at .0.

If someone wants to espouse the business benefits of running pre-release
versions in staging environments against stable business code please start
a new thread focused on the "process" and not the "people".

David J.


Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread David G. Johnston
On Wednesday, October 18, 2017, Joshua D. Drake <j...@commandprompt.com>
wrote:
>
> I am not sure why this is even a question. There are plenty of businesses
> that can risk the deployment of a .0 release but there are also *MANY THAT
> CAN NOT*. The proper way to do this is to have a staging server running the
> .0 release that gets beaten on by the application for a few months and
> reports anything back to the community they find.
>

The continuum goes from having a staging server follow master/HEAD to
upgrading one version once a year as the earliest supported release gets
de-supported.  The closer you are to the first position, the more you
contribute back to the community and the more quickly you can benefit from
the new features and enhancements each new release brings.

David J.


Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread David G. Johnston
On Wed, Oct 18, 2017 at 8:16 AM, Igal @ Lucee.org <i...@lucee.org> wrote:

> On 10/18/2017 7:45 AM, Ron Johnson wrote:
>
> On 10/18/2017 09:34 AM, Igal @ Lucee.org wrote:
>
> A bit off-topic here, but why upgrade to 9.6 when you can upgrade to
> 10.0?
>
>
> There's no way we're going to put an x.0.0 version into production.
>
>
> Then think of it as 9.7.0 but with an easier name to pronounce ;)
>

The OP likely intended to say "x.0" version; a "[9.7].0" version would be
just the same as a [10].0 version in that respect.

The contributors do an excellent job but the reality of this community is
that a critical mass of people do not start seriously testing and using a
new version until it is officially released.  The first couple of bug-fix
releases are thus, unfortunately, likely to be non-trivial as the masses
flex the system at scales and using workloads that were not known or
available to the developers.  It's a balancing act for most, and falling on
the side of waiting for a few point releases before promoting to production
is, I suspect, common.

David J.


Re: [GENERAL] Problems with the time in data type timestamp without time zone

2017-10-18 Thread David G. Johnston
On Wed, Oct 18, 2017 at 8:21 AM, américo bravo astroña <
americobr...@gmail.com> wrote:

> Hi,
>
> I have a program that saves information in a DB Postgresql need to extract
> data from date and time of that DB but when I retrieve the date and time
> information is always ahead 3 hours, the type of data that has that field
> is timestamp without time zone,
>
> Please forgive my english I'm using translator.
>

​A minimal SQL example of your problem would help.

David J.​


Re: [GENERAL] Simple query fail

2017-10-17 Thread David G. Johnston
On Tue, Oct 17, 2017 at 2:29 PM, Glenn Pierce <glennpie...@gmail.com> wrote:


> and I have a simple query that fails
>

​This is not failure, this is a query that found zero matching records.


>
> Ie
>
> SELECT sensor_id, MAX(ts), date_trunc('day', ts), COALESCE(MAX(value),
> 'NaN')::float FROM sensor_values_days WHERE ts > '2017-10-06
> 00:01:01+00' AND ts < '2017-10-06 23:59:59+00' GROUP BY 1, 3 ORDER BY
> 1, 2;
>  sensor_id | max | date_trunc | coalesce
> -----------+-----+------------+----------
> (0 rows)
>
>
> If I remove the timezone part of the start date I get results.
>
> Ie
>
>
>  ts > '2017-10-06 00:01:01'
>
> 597551 | 2017-10-06 01:00:00+01 | 2017-10-06 00:00:00+01 |13763
>
> I'm sure I am doing something silly but can't see what.

Does anyone know what is going on here ?
>

​The "max(ts)" result indicates a time of midnight, the 6th, GMT

ts > '2017-10-06 00:01:01+01' equates to > '2017-10-05 23:01:01+00'  of
which midnight, the 6th, GMT is indeed more recent

ts > '2017-10-06 00:01:01+00' is 00:01:01 on the 6th, GMT, of which
midnight GMT, the 6th, is NOT more recent
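
Spelled out as a query (the three values below are fixed instants; the
comments show them converted to UTC):

SELECT '2017-10-06 00:01:01+00'::timestamptz AS bound_utc,    -- 00:01:01 Oct 6 UTC
       '2017-10-06 00:01:01+01'::timestamptz AS bound_plus1,  -- 23:01:01 Oct 5 UTC
       '2017-10-06 01:00:00+01'::timestamptz AS max_ts;       -- 00:00:00 Oct 6 UTC
-- max_ts is after bound_plus1 but not after bound_utc, so the stricter
-- bound excludes the row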

David J.


Re: [GENERAL] Using Substitution Variables In PostgreSQL

2017-10-16 Thread David G. Johnston
On Mon, Oct 16, 2017 at 7:08 AM, Osahon Oduware <osahon@gmail.com>
wrote:

> Hi All,
>
> I wanted to find out how to use a substitution variable in an SQL
> statement that would cause the user to be prompted for a value. Something
> similar to the ampersand (&&) in ORACLE.
>
> For example, given the SQL statement below:
> SELECT <column1>, <column2>, <column3>
> FROM <table>
> WHERE <column4> = 35
>
> I want the user to be prompted for the value in the WHERE (filter) clause,
> e.g.
> SELECT <column1>, <column2>, <column3>
> FROM <table>
> WHERE <column4> = ?
>
> I would be glad if someone could point me in the right direction.
>

​That would be a client-side feature.  The only client supported on this
list is psql.  psql does not have this capability.  It does have the
"\prompt" ​meta-command which will serve in this specific case.

\prompt 'Enter a value for varname' varname

SELECT ... WHERE <column4> = :'varname';

https://www.postgresql.org/docs/10/static/app-psql.html

David J.


Re: [GENERAL] REASSIGN OWNED simply doesn't work

2017-10-13 Thread David G. Johnston
On Fri, Oct 13, 2017 at 6:04 AM, Alvaro Herrera <alvhe...@alvh.no-ip.org>
wrote:

> Sam Gendler wrote:
> > psql 9.6.3 on OS X.
> >
> > I'm dealing with a production database in which all db access has been
> made
> > by the same user - the db owner, which isn't actually a superuser because
> > the db runs on amazon RDS - amazon retains the superuser privilege for
> its
> > own users and makes non-superuser role with createrole and createdb
> > privileges for use as the primary role by the AWS account.
>
> It's true that REASSIGN OWNED is limited to a very particular scenario.
> It was written to support the specific case of wanting to drop a role,
> and that can only be done by a superuser, so why would it matter that
> REASSIGN OWNED itself could not be run by a superuser?
>

​You could at least fix the documentation bug since this superuser-only
restriction doesn't show up and is in fact contradicted by the sentence
​"REASSIGN OWNED requires privileges on both the source role(s) and the
target role."  The error message that comes back seems like it could be
improved as well.

The word "privileges" there seems odd too, wouldn't "membership" be more
appropriate?

https://www.postgresql.org/docs/10/static/sql-reassign-owned.html

David J.


Re: [GENERAL] Permissions for Web App

2017-10-13 Thread David G. Johnston
On Fri, Oct 13, 2017 at 11:03 AM, Igal @ Lucee.org <i...@lucee.org> wrote:

> You mean that if I execute the ALTER DEFAULT command above as user
> `postgres` then only tables created by user `postgres` will give default
> privileges to role `webapp`?
>

​Yes.  "​You can change default privileges only for objects that will be
created by yourself or by roles that you are a member of."

You use the "FOR " modifier if you want the "or by roles that
you are a member of" portion to be used.
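
For instance, run as a member of the owning role (role names other than
webapp are hypothetical):

ALTER DEFAULT PRIVILEGES FOR ROLE app_owner IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO webapp;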

David J.


Re: [GENERAL] "Shared strings"-style table

2017-10-13 Thread David G. Johnston
On Fri, Oct 13, 2017 at 9:29 AM, Seamus Abshere <sea...@abshere.net> wrote:

> > On Fri, Oct 13, 2017 at 8:49 AM, Seamus Abshere wrote
> > > Theoretically / blue sky, could there be a table or column type that
> > > transparently handles "shared strings" like this, reducing size on disk
> > > at the cost of lookup overhead for all queries?
> > > (I guess maybe it's like TOAST, but content-hashed and de-duped and not
> > > only for large objects?)
>
> On Fri, Oct 13, 2017, at 01:12 PM, David G. Johnston wrote:
> > Row-independence is baked into PostgreSQL pretty deeply...
>
> Could you say more about that?
>

Not intelligibly... basically, as far as PostgreSQL is concerned, all the
data to reconstruct a row from a given table is present in that table.
From a practical perspective the "TOAST table" for a table IS part of the
main table since it has no practical independent use.

As an aside I was thinking along the lines of an actual compression routine
which is what a spreadsheet file is able to do since a spreadsheet contains
the data from every row and column in a single file and is able to compress
the entire file by finding commonalities across rows and columns.  A
database generally cannot do that.

As for "transparent lookup tables for text columns"...I suppose one could
implement a "system-managed-enum" type with many of the same properties of
an actual enum but avoiding many of its problems by not exposing the
enum-ness to the user and instead just exposing the text labels...I suspect
faced with prospect of doing something that complex most users would just
setup a FK relationship.
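
That manual route would look something like this (a sketch with made-up
names):

CREATE TABLE label_dict (
    label_id serial PRIMARY KEY,
    label    text NOT NULL UNIQUE
);
CREATE TABLE observation (
    obs_id   bigserial PRIMARY KEY,
    label_id int NOT NULL REFERENCES label_dict (label_id)
);
-- the long repetitive strings are stored once in label_dict; the fact
-- table carries only the small surrogate key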

> What about the comparison to TOAST, which stores values off-table?
>

TOAST solves a technical problem related to the fact that records "on the
table" have a very small size limitation (kb) while stored values can be at
least as large as a GB.  TOAST does involve compression, but the input to
the compression algorithm is a single cell (row and column) in a table.
As noted above I consider the TOAST table and main table to be a single
logical table.

Like I said the enum type has similar properties to what you want - but
Melvin is right that using it requires careful consideration of how your
data might change in the future.

David J.


Re: [GENERAL] "Shared strings"-style table

2017-10-13 Thread David G. Johnston
On Fri, Oct 13, 2017 at 8:49 AM, Seamus Abshere <sea...@abshere.net> wrote:

> Theoretically / blue sky, could there be a table or column type that
> transparently handles "shared strings" like this, reducing size on disk
> at the cost of lookup overhead for all queries?
>
> (I guess maybe it's like TOAST, but content-hashed and de-duped and not
> only for large objects?)
>

Row-independence is baked into PostgreSQL pretty deeply...

I think an enum type is about as close are you are likely to get if you
don't wish to setup your own foreign-key relationships with surrogate keys.

David J.


Re: [GENERAL] Equivalence Classes when using IN

2017-10-11 Thread David Rowley
On 12 October 2017 at 10:15, Kim Rose Carlsen <k...@hiper.dk> wrote:
> Why don't I see that predicate (customer_id) pushed into the outer nested 
> loop so we don't have to sort the whole table on each loop.
>
> (See original post and follow up for definitions)
> QUERY PLAN
> -
> Nested Loop Left Join  (cost=139.00..10392.96 rows=668 width=16) (actual 
> time=0.528..35.120 rows=200 loops=1)
>   Join Filter: (c.customer_id = product.customer_id)
>   Rows Removed by Join Filter: 199900
>   ->  Nested Loop  (cost=0.28..199.21 rows=334 width=12) (actual 
> time=0.075..1.146 rows=100 loops=1)
> ->  Seq Scan on customer  (cost=0.00..21.51 rows=334 width=8) (actual 
> time=0.067..0.282 rows=100 loops=1)
>   Filter: (age < 20)
>   Rows Removed by Filter: 901
> ->  Index Only Scan using customer_pkey on customer c  
> (cost=0.28..0.53 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=100)
>   Index Cond: (customer_id = customer.customer_id)
>   Heap Fetches: 100
>   ->  Materialize  (cost=138.73..173.75 rows=2001 width=8) (actual 
> time=0.005..0.130 rows=2001 loops=100)
> ->  Sort  (cost=138.73..143.73 rows=2001 width=8) (actual 
> time=0.448..0.588 rows=2001 loops=1)
>   Sort Key: product.customer_id, product.product_id
>   Sort Method: quicksort  Memory: 142kB
>   ->  Seq Scan on product  (cost=0.00..29.01 rows=2001 width=8) 
> (actual time=0.006..0.215 rows=2001 loops=1)
> Planning time: 0.214 ms
> Execution time: 35.284 ms

I don't really see any blockers that would mean we couldn't support
this, it's just that we don't currently support it. The predicates
that we do pushdown are just ones we deem as safe to pushdown of the
ones that appear in the query, or ones that can be derived through
equivalence. (e.g. ab.a = ab.b and ab.b = 1 --> ab.a = 1)

For example, consider the difference between the following:

create table ab(a int, b int);
insert into ab select x,x from generate_series(1,100)x;
create index on ab(a);
create index on ab(b);

postgres=# explain select * from (select distinct on (a) a,b from ab
order by a,b) ab where ab.b < 10;
QUERY PLAN
---
 Subquery Scan on ab  (cost=127757.34..145257.34 rows=33 width=8)
   Filter: (ab.b < 10)
   ->  Unique  (cost=127757.34..132757.34 rows=100 width=8)
 ->  Sort  (cost=127757.34..130257.34 rows=100 width=8)
   Sort Key: ab_1.a, ab_1.b
   ->  Seq Scan on ab ab_1  (cost=0.00..14425.00
rows=100 width=8)
(6 rows)


postgres=# explain select * from (select distinct on (a) a,b from ab
order by a,b) ab where ab.a < 10;
  QUERY PLAN
---
 Unique  (cost=8.73..8.77 rows=9 width=8)
   ->  Sort  (cost=8.73..8.75 rows=9 width=8)
 Sort Key: ab.a, ab.b
 ->  Index Scan using ab_a_idx on ab  (cost=0.42..8.58 rows=9 width=8)
   Index Cond: (a < 10)
(5 rows)

The "a < 10" was pushed down as we're distinct on (a), but pushing
down "ab.b < 10" would be invalid and could cause wrong results.

The predicate you'd like to see pushed down is actually a parameter in
a parameterized Path and we don't currently generate any parameterized
paths outside of each query level. Likely there's no good reason for
this other than it's not been done yet, but it's really only been
since 9.6 that the query planner has been flexible enough to possibly
allow something like this to be done at all.

The reason the planner may appear to push down the predicate when
there's no DISTINCT ON clause is that the planner was able to pull the
subquery (or view) up a level. When the planner is able to do this
it's much more flexible to the types of plans it can generate. It's
just that we don't ever pull up subqueries with DISTINCT ON, plus a
bunch of other reasons.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Equivalence Classes when using IN

2017-10-11 Thread David Rowley
On 12 October 2017 at 08:37, Kim Rose Carlsen <k...@hiper.dk> wrote:
>
>> Yeah.  The ORDER BY creates a partial optimization fence, preventing
>> any such plan from being considered.
>>>
>
> I can see in the general case it semanticly means different things If you 
> allow the WHERE to pass through ORDER BY.
>
> A special case can be allowed for WHERE to pass the ORDER BY if the column is 
> part of DISTINCT ON.

Yeah, we do allow predicates to be pushed down in that case.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] [asking about how to upgrade docker postgresql without losing the data]

2017-10-10 Thread David G. Johnston
On Tue, Oct 10, 2017 at 4:25 AM, Olivani Prisila <olivan...@gmail.com>
wrote:

> Hi,
>
> I am beginner both of docker and postgresql.
>
> How do i upgrade docker postgresql 9.5 into 9.6 without losing my
> current database?
> fyi: im using ubuntu verison 14 and docker 17.09
>

​More of a Docker forum question than PostgreSQL.  It depends on whether
you setup a distinct data container and/or mounted a host location into the
container to store the data.  In that case you should be able to link that
external dependency into the new runtime container in the same manner as it
is linked into the existing one.

If the data is directly within the runtime container it will be
considerably more difficult - though pg_dump/pg_restore might prove easiest
to accomplish.

David J.


Re: [GENERAL] Equivalence Classes when using IN

2017-10-09 Thread David Rowley
On 10 October 2017 at 12:44, Tom Lane <t...@sss.pgh.pa.us> wrote:
> David Rowley <david.row...@2ndquadrant.com> writes:
>> If the only reason that is_simple_subquery() rejects subqueries with
>> ORDER BY is due to wanting to keep the order by of a view, then
>> couldn't we make is_simple_subquery() a bit smarter and have it check
>> if the subquery is going to be joined to something else, which likely
>> would destroy the order, or at least it would remove any guarantees of
>> it.
>
> I'm not on board with this.  The assumption is that if the user put an
> ORDER BY there, that means they want that subquery to be computed in that
> order.  It's not for us to decide they didn't mean what they said.
>
> Moreover, there are cases where the ORDER BY would be semantically
> significant, eg if there's a LIMIT or volatile functions or tSRFs
> involved.

Ok, thanks for looking, although, FWIW, LIMIT and tSRFs are still disabled.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Equivalence Classes when using IN

2017-10-09 Thread David Rowley
On 10 October 2017 at 02:51, Tom Lane <t...@sss.pgh.pa.us> wrote:
> David Rowley <david.row...@2ndquadrant.com> writes:
>> It's pretty bad practice to have ORDER BY in views. I kinda wish we
>> didn't even allow it, but that ship sailed many years ago...
>
> I think it's actually disallowed by the SQL spec (although so are
> many other things we support).  IMO it's a useful facility to have
> for views that are meant for direct presentation to clients ---
> but if you'd like joins to the view to be optimized, you don't
> want an ORDER BY in there.

If the only reason that is_simple_subquery() rejects subqueries with
ORDER BY is due to wanting to keep the order by of a view, then
couldn't we make is_simple_subquery() a bit smarter and have it check
if the subquery is going to be joined to something else, which likely
would destroy the order, or at least it would remove any guarantees of
it.

Something like the attached?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


pullup_subqueries_with_order_by_when_its_not_the_only_fromitem.patch
Description: Binary data



Re: [GENERAL] Permissions for Web App

2017-10-09 Thread David G. Johnston
On Mon, Oct 9, 2017 at 9:44 AM, Igal @ Lucee.org <i...@lucee.org> wrote:

> But I want to give that role permissions on future tables since I add new
> tables and drop/recreate current ones.
>

​ALTER DEFAULT PRIVILEGES​

​https://www.postgresql.org/docs/9.6/static/sql-alterdefaultprivileges.html

David J.


Re: [GENERAL] pg_start/stop_backup naming conventions

2017-10-09 Thread David G. Johnston
On Mon, Oct 9, 2017 at 9:33 AM, mj0nes <matthew.jo...@ramtech.co.uk> wrote:

> Hi,
>
> I'm just starting out on a rolling backup strategy and the naming
> convention
> has thrown me slightly for the WAL and "backup_label" files.
>
 ​[...]​

> Thanks for any pointers.
>

​I'll give out the standard advice - don't roll-your-own backup solution -
investigate and choose one the many supported and maintained backup
solutions (most are open source) that are already available.  This kind of
detail, then, should become unimportant to you.

David J.


Re: [GENERAL] Equivalence Classes when using IN

2017-10-09 Thread David Rowley
On 9 October 2017 at 22:39, Kim Rose Carlsen <k...@hiper.dk> wrote:
> EXPLAIN ANALYZE
>  SELECT *
>FROM customer
>JOIN view_customer
>  ON customer.customer_id = view_customer.customer_id
>   WHERE age < 20;
>
>  QUERY PLAN
> -
>  Nested Loop Left Join  (cost=139.00..10392.96 rows=668 width=16) (actual 
> time=0.528..35.120 rows=200 loops=1)
>Join Filter: (c.customer_id = product.customer_id)
>Rows Removed by Join Filter: 199900
>->  Nested Loop  (cost=0.28..199.21 rows=334 width=12) (actual 
> time=0.075..1.146 rows=100 loops=1)
>  ->  Seq Scan on customer  (cost=0.00..21.51 rows=334 width=8) 
> (actual time=0.067..0.282 rows=100 loops=1)
>Filter: (age < 20)
>Rows Removed by Filter: 901
>  ->  Index Only Scan using customer_pkey on customer c  
> (cost=0.28..0.53 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=100)
>Index Cond: (customer_id = customer.customer_id)
>Heap Fetches: 100
>->  Materialize  (cost=138.73..173.75 rows=2001 width=8) (actual 
> time=0.005..0.130 rows=2001 loops=100)
>  ->  Sort  (cost=138.73..143.73 rows=2001 width=8) (actual 
> time=0.448..0.588 rows=2001 loops=1)
>Sort Key: product.customer_id, product.product_id
>Sort Method: quicksort  Memory: 142kB
>->  Seq Scan on product  (cost=0.00..29.01 rows=2001 width=8) 
> (actual time=0.006..0.215 rows=2001 loops=1)
>  Planning time: 0.214 ms
>  Execution time: 35.284 ms

You would benefit from adding the age column to view_customer, or at
least consider having some view which contains all the columns you'll
ever need from those tables and if you need special views with only a
subset of columns due to some software doing "select * from
viewname;", then you could just create some. Joining to the same table
again seems like a bit of a waste of effort for the planner and
executor.  I'd assume customer_id is the PRIMARY KEY of customer and
is unique.

It's not all that clear what your view is doing here. Confusingly
there's a Sort in the plan, yet nothing in the query asked for that,
so I guess that the view must have an ORDER BY. If you get rid of that
the planner would likely use an index on product (customer_id) to
parameterise the nested loop, at least, it likely would, if you have
one.

It's pretty bad practice to have ORDER BY in views. I kinda wish we
didn't even allow it, but that ship sailed many years ago...

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] OR-clause support for indexes

2017-10-09 Thread David Rowley
On 8 October 2017 at 21:30, Andreas Joseph Krogh <andr...@visena.com> wrote:
> There was a while ago a proposed patch for adding $subject;
> https://commitfest.postgresql.org/8/454/

That looks like it's been abandoned, but perhaps it's worth asking the
author directly?

> Is this being worked on? Any progress in btree-support?

Not exactly what you're asking, but perhaps
https://commitfest.postgresql.org/14/1001/ could improve your
workload, or perhaps you could just manually rewrite the query.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Equivalence Classes when using IN

2017-10-08 Thread David Rowley
On 9 October 2017 at 08:01, Kim Rose Carlsen <k...@hiper.dk> wrote:
> Is this because postgres never consider IN clause when building equivalence
> class's?

Only btree equality operators are considered at the moment.

> Are there any interests in adding such rule?

There's been some discussion on it previously, although there is lots
to still be worked out, for example, it's not that clear if it will
always be a win to always apply the qual.

There are more details of the discussion in [1], although there's
probably lots more threads to be found if you search the archives.

[1] 
https://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com#cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] a JOIN to a VIEW seems slow

2017-10-07 Thread David Rowley
On 7 October 2017 at 22:34, Frank Millman <fr...@chagford.com> wrote:
> 4. Select from ar_trans_due including join to ar_trans,
> plus sub_select from ar_trans_alloc including join to ar_trans
> SELECT *,
> (SELECT SUM(c.alloc_cust)
> FROM ar_trans_alloc c
> LEFT JOIN ar_trans d
>ON d.tran_type = c.tran_type
>AND d.tran_row_id = c.tran_row_id
> WHERE c.due_row_id = a.row_id)
> FROM ar_trans_due a
> LEFT JOIN ar_trans b
>ON b.tran_type = a.tran_type
>AND b.tran_row_id = a.tran_row_id
>
> Sql Server: 1.01 sec; PostgreSQL 1683 sec

Yeah, PostgreSQL does not make any effort to convert subqueries in the
target list into joins. SQL server does.

The way you have written the query might be good if there are not so
many rows in the outer part of the query; however, as the number of
rows increases, performance will get worse pretty quickly.

You'll probably find it'll run faster if you convert the subquery in
the target list into a join with a GROUP BY, like:

SELECT a.*,b.*,c.sum_alloc_cust
FROM ar_trans_due a
LEFT JOIN ar_trans b
ON b.tran_type = a.tran_type
AND b.tran_row_id = a.tran_row_id
LEFT JOIN (SELECT c.due_row_id,
SUM(c.alloc_cust) AS sum_alloc_cust
FROM ar_trans_alloc c
LEFT JOIN ar_trans d
ON d.tran_type = c.tran_type
AND d.tran_row_id = c.tran_row_id
GROUP BY c.due_row_id
) c ON c.due_row_id = a.row_id;

SQL Server will probably be doing this rewrite.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Functions and Parentheses

2017-10-06 Thread David G. Johnston
On Fri, Oct 6, 2017 at 2:18 PM, Igal @ Lucee.org <i...@lucee.org> wrote:

> Hi,
>
> Is current_date a function?  It's a bit puzzling to me since there are no
> parentheses after it, i.e.
>
>   SELECT current_date;
>
> And not
>
>   SELECT current_date();  -- syntax error
>
It, and the others like it, behave as functions.  They don't require
parentheses because the SQL standard defines them without parentheses.

> How come `current_date` has no parenthesis but `clock_timestamp()` does?
>
clock_timestamp isn't standard defined and unless the standard forces us to
do otherwise function invocation requires parentheses.
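
Both forms side by side:

SELECT current_date, current_timestamp, clock_timestamp();
-- parentheses only on the non-standard function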

See 9.9.4 (
https://www.postgresql.org/docs/9.6/static/functions-datetime.html )

David J.


Re: [GENERAL] Setting search_path ignored

2017-10-02 Thread David G. Johnston
On Mon, Oct 2, 2017 at 7:09 AM, Guyren Howe <guy...@gmail.com> wrote:

> I logged out and back and did SET ROLE and got the same resullt.
>

Are you logging in as "thing_accessor" or some role that is a member of
"thing_accessor"?

David J.



Re: [GENERAL] Setting search_path ignored

2017-10-02 Thread David G. Johnston
On Mon, Oct 2, 2017 at 7:00 AM, Guyren Howe <guy...@gmail.com> wrote:

> CREATE ROLE thing_accessor;
>
> CREATE ROLE
>
> CREATE SCHEMA thing_accessor;
>
> CREATE SCHEMA
>
> covermything=> ALTER ROLE thing_accessor SET search_path=thing_accessor;
>
> ALTER ROLE
>
> covermything=# SET ROLE thing_accessor;
>
> SET
>
> covermything=> SHOW search_path;
>
>search_path
>
> -
>
>  "$user", public
>
> (1 row)
>
>
>
> This seems to contradict all the advice I can find about setting the
> schema search path.
>

Settings associated with roles only take effect at session start/login.
SET ROLE does not cause them to be read/executed.

https://www.postgresql.org/docs/9.6/static/config-setting.html

"Values set with ALTER DATABASE and ALTER ROLE are applied only when
starting a fresh database session."
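
So, within an existing session, the setting has to be applied explicitly
after SET ROLE (or start a fresh connection as a role that carries it):

SET ROLE thing_accessor;
SET search_path = thing_accessor;  -- role-level settings are not applied by SET ROLE
SHOW search_path;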

David J.


Re: [GENERAL] Plan changes from index scan to seq scan after 5 executions

2017-09-30 Thread David G. Johnston
On Sat, Sep 30, 2017 at 10:57 AM, Alexander Kukushkin <cyberd...@gmail.com>
wrote:

> Hi,
>
> Recently I've been investigating a strange behavior of one stored
> procedure.
>

Please provide the output of:

SELECT version();

David J.


Re: [GENERAL] COPY vs \COPY FROM PROGRAM $$ quoting difference?

2017-09-29 Thread David G. Johnston
On Fri, Sep 29, 2017 at 9:27 AM, Alexander Stoddard <
alexander.stodd...@gmail.com> wrote:

> I found what seems to be an odd difference between COPY and \copy parsing.
>
[...]


> COPY dest_table FROM PROGRAM $$ sed 's/x/y/' | etc... $$
>
> To my surprise this worked with COPY but not \COPY which failed with:
> \copy: parse error at "$$"
>
> Is this an undocumented difference? Is this even the appropriate email
> list to ask this kind of question or report such a difference?
>

​This is the correct place for seeking such clarification.​  The docs
cannot cover every possible thing people might do and these lists (-general
in particular) are here to fill in the gaps.

​The negative condition that "psql" itself doesn't understand
dollar-quoting​ is not documented.  Dollar-quoting is documented as a
server-interpreted SQL Syntax feature and only applies there.

While the commands are similar COPY is server-side SQL while \copy is a
psql meta-command that psql converts to SQL, executes, obtains the results,
and processes.  Note that the server would never see "PROGRAM $$" since the
server would be unable to access the local program being referred to.  The
server sees "FROM stdin" and psql feeds the results of the PROGRAM
invocation to the server over that pipe.

David J.


Re: [GENERAL] Catching errors inside a LOOP is causing performance issues

2017-09-28 Thread David G. Johnston
On Thu, Sep 28, 2017 at 1:08 AM, Denisa Cirstescu <
denisa.cirste...@tangoe.com> wrote:

> Hi Tom,
>
> You said that trapping an *arbitrary* exception is a “fairly expensive
> mechanism”.
>
I suppose a better (though maybe not perfectly accurate) wording is that
setting up the pl/pgsql execution layer to trap "arbitrary SQL-layer
exceptions" is fairly expensive.  Even if the user specifies specific
errors, the error-handling mechanism in pl/pgsql is coded for generic
(arbitrary) errors being given to it.

David J.


Re: [GENERAL] pg_upgrade?: Upgrade method from/to any version on random OS?

2017-09-27 Thread David G. Johnston
On Wed, Sep 27, 2017 at 12:48 PM, Hans Schou <hans.sc...@gmail.com> wrote:

> I have looked through
> https://www.postgresql.org/docs/9.6/static/pgupgrade.html
> but it seems more complicated than necessary.
>

[perform dump/restore]

It went very good but took 100 minutes - where we had downtime - not so
> good.
>

​There is a correlation between the decreased downtime that pg_upgrade
facilitates and its complexity.​

If you cannot afford the 2 hours of downtime for dump/restore then likely
learning the pg_upgrade process would be the better choice for you.

David J.


Re: [GENERAL] Catching errors inside a LOOP is causing performance issues

2017-09-27 Thread David G. Johnston
On Mon, Sep 25, 2017 at 9:13 AM, Denisa Cirstescu <
denisa.cirste...@tangoe.com> wrote:

>
> Can someone please explain to me why this worked?
>
> What happened behind the scenes?
>
> I suspect that when you catch exceptions inside of a LOOP and the code
> ends up generating an exception, Postgres can’t use cached plans to
> optimize that code so it ends up planning the code at each iteration and
> this causes performance issues.
>
> Is my assumption correct?
>
>
>
​Not sure how much detail you are looking for but the docs say this:

"​Tip: A block containing an EXCEPTION clause is significantly more
expensive to enter and exit than a block without one. Therefore, don't use
EXCEPTION without need."

https://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

I'm somewhat doubting "plan caching" has anything to do with this; I
suspect it's basically that there is high memory and runtime overhead to
deal with the possibility of needing to convert an exception into a branch
instead of allowing it to be fatal.
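
As a rough illustration of where that overhead sits (a throwaway sketch, not
anyone's production code): the first block sets up a subtransaction on every
loop iteration just in case an error must be trapped, the second pays
nothing extra:

DO $$
BEGIN
    FOR i IN 1..100000 LOOP
        BEGIN                               -- subtransaction entered each iteration
            PERFORM 1 / i;
        EXCEPTION WHEN division_by_zero THEN
            NULL;                           -- never reached here, but still paid for
        END;
    END LOOP;
END;
$$;

DO $$
BEGIN
    FOR i IN 1..100000 LOOP
        PERFORM 1 / i;                      -- same work, no exception machinery
    END LOOP;
END;
$$;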

David J.


Re: [GENERAL] hard parse?

2017-09-21 Thread David G. Johnston
On Thu, Sep 21, 2017 at 5:48 AM, Peter Koukoulis <pkoukou...@gmail.com>
wrote:

> Hi
>
> I have a query where a filter would always be negative, how many steps,
> out these:
>
>- parsing and syntax check
>- semantic analysis
>- transformation process (query rewrite based on system or
>user-defined rules)
>- query optimization
>- execution
>
> would be performed or not? Also, where in the documentation can I found
> out which of the above phases would be performed?
>
> For example, for a query such as the following:
>
> select x,y from test1 where 1=0;
>
>
I'm inferring behavior here but...​

​All of them.  You are still going to get a result set with zero records
and the correct column structure.  i.e., "Execution".  None of the other
stuff can be skipped in getting the engine to that point.  With an "always
false" predicate and that simple a query structure most of the other
stuff, including execution, is probably performed in seemingly zero time,
but it still has to work through each step of the process - if nothing else
than to move through an if-branch to decide that nothing material needs to
be done.
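
A quick way to see the tail end of that (column types assumed; exact costs
and widths will differ): the planner folds 1=0 down to false, yet the
executor still runs and returns an empty but correctly-shaped result:

EXPLAIN SELECT x, y FROM test1 WHERE 1 = 0;
--  Result  (cost=0.00..0.00 rows=0 width=...)
--    One-Time Filter: false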

David J.
​


Re: [GENERAL] Insert large number of records

2017-09-20 Thread David G. Johnston
On Tuesday, September 19, 2017, Job <j...@colliniconsulting.it> wrote:

> and would not care about table partitioning (COPY command fire
> partitioned-table triggers).


You might want to write a script that inserts directly into the partitions
and bypasses routing altogether.

Insert into ... select from ... is your only option for table-to-table and
you are stuck with whatever locks the execution of the command needs to
take.
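
A minimal sketch of the direct-to-partition route (hypothetical partition
and staging table names, assuming trigger-based partitioning on a date
column):

INSERT INTO measurements_y2017m09
SELECT *
  FROM staging_measurements
 WHERE logdate >= DATE '2017-09-01'
   AND logdate <  DATE '2017-10-01';

Because the child table is named explicitly, the parent's routing trigger
never fires for these rows.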

David J.


Re: [GENERAL] VM-Ware Backup of VM safe?

2017-09-20 Thread David Steele
On 9/20/17 7:36 AM, PT wrote:
> On Wed, 20 Sep 2017 10:03:15 +0200
> Thomas Güttler <guettl...@thomas-guettler.de> wrote:
> 
>> We run a PostgreSQL 9.6 server in a virtual machine.
>>
>> The virtual machine is managed by the customer.
>>
>> He does backup the VM.
>>
>> Is this enough, is this safe?
> 
> There are so many variables involved with doing that ... I don't think
> anyone can reliably answer that question.
> 
> I recommend you put together a periodic test schedule where you restore
> a machine from the backup and ensure everything works. To be honest, you
> should be doing that anyway.

Restore testing is a must, but a bad backup scheme can result in subtle
errors that are very hard to detect.

If you can't find specific documentation that your VM backup solution is
safe to use with a DBMS then it is almost certainly not safe.  Even if
it says it is there are potential gotchas.  For example, the backup may
not be consistent if you are using multiple volumes.

In addition, data loss on restore will be greater if there is no WAL
archive to play forward from.

-- 
-David
da...@pgmasters.net




Re: [GENERAL] reload postgresql with invalid pg_hba.conf

2017-09-18 Thread David G. Johnston
On Mon, Sep 18, 2017 at 12:36 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> jotpe <jo...@posteo.de> writes:
> > A system administration applied an invalid line into the pg_hba.conf
> > file and called "service postgresql reload". Since that command doesn't
> > return any error and leaves with exit code 0 it seams that new
> > configuration was applied.
>
> > Of course postgresql added a warning to the log file: "pg_hba.conf not
> > reloaded".
> > But even when I execute SELECT pg_reload_conf(), true is returned.
>
> > Is this the desired behavior?
>
> I wouldn't say it's desired behavior, exactly, but there's no very
> good way to improve it.  pg_ctl has no visibility into what the postmaster
> is thinking.
>

The function signature (and docs) for pg_reload_conf make the OP's
interpretation understandable.  The docs and intuition would lead one to
assume that "true" means the reload was successful and "false" means it was
not (possibly with reasons emitted as notices/warnings/errors).​  But all
it tells us is whether a signal "was sent" and not whether it was
successfully acted upon by the other party.

Something like the attached makes this dynamic stand out.

https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL

Cancel and terminate backends do have conditional return values so those
two seem OK.

pg_rotate_logfile() I suppose might want to be clarified here as well
though I suspect it is less prone to complications than pg_reload_conf is.

The description of pg_ctl doesn't lead to the same kind of assumptions
being made (i.e., it only speaks of sending a signal and has no declared
return value) though maybe an additional sentence wouldn't hurt...?

The docs on configuring the server speak to this overall flow and to what
happens when invalid values are encountered (i.e., they are ignored):

https://www.postgresql.org/docs/current/static/config-setting.html#CONFIG-INCLUDES
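
As a practical follow-up check (assuming PostgreSQL 10 or later, where the
pg_hba_file_rules view is available), a reload can be paired with a query
that surfaces any lines the server failed to parse:

SELECT pg_reload_conf();            -- true merely means the signal reached the postmaster

SELECT line_number, error
  FROM pg_hba_file_rules
 WHERE error IS NOT NULL;           -- rows here mean the on-disk pg_hba.conf has
                                    -- errors a reload would reject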

David J.
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 641b3b8f4e..9ac7bf1e87 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -18373,7 +18373,7 @@ SELECT set_config('log_statement_stats', 'off', false);
 pg_reload_conf()
 
boolean
-   Cause server processes to reload their configuration 
files
+   Cause server processes to attempt reloading their configuration 
files
   
   

@@ -18420,7 +18420,10 @@ SELECT set_config('log_statement_stats', 'off', false);

 pg_reload_conf sends a SIGHUP signal
 to the server, causing configuration files
-to be reloaded by all server processes.
+to be reloaded by all server processes.  The return value only indicates
+whether the signal was sent successfully to the postmaster.  It does not
+say whether the reload was performed and propogated successfully by
+the postmaster.

 




Re: [GENERAL] Selecting a daily puzzle record - which type of column to add?

2017-09-17 Thread David G. Johnston
On Sun, Sep 17, 2017 at 1:13 PM, Alexander Farber <
alexander.far...@gmail.com> wrote:

>
> I need to add a useful column, so that it would be easy to me to create a
> web script which would display today's and all past "daily puzzle" records
> - and wouldn't change the already published puzzles...
>
>
Serial (i.e. integer/bigint with a sequence generator).  I would then add
another table that simply stores the "last manually reviewed id" as its
only record (or you could record manual reviews and take the max of that
field from the table).
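
A minimal sketch of that arrangement (hypothetical table and column names):

ALTER TABLE daily_puzzle ADD COLUMN id bigserial;

CREATE TABLE daily_puzzle_review (
    last_reviewed_id bigint NOT NULL
);

-- show only puzzles at or below the manually reviewed watermark
SELECT p.*
  FROM daily_puzzle p
  JOIN daily_puzzle_review r ON p.id <= r.last_reviewed_id
 ORDER BY p.id;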

David J.


Re: [GENERAL] Remove useless joins (VARCHAR vs TEXT)

2017-09-16 Thread David Rowley
On 17 September 2017 at 08:07, Kim Rose Carlsen <k...@hiper.dk> wrote:
> It seems there are some difference in VARCHAR vs TEXT when postgres tries to
> decide if a LEFT JOIN is useful or not. I can't figure out if this is
> intentional because there are some difference between TEXT and VARCHAR that
> I dont know about or if it's a bug.
>
>
> I would expect both examples to produce same query plan
>
>
> a)
>
> create table a (id varchar primary key);
> create table b (id varchar primary key);
>
> explain   select a.*
>  from a
> left join (select distinct id from b) as b
>on a.id = b.id;
>
>
> QUERY PLAN
> --
>  Hash Right Join  (cost=67.60..113.50 rows=1360 width=32)
>Hash Cond: ((b.id)::text = (a.id)::text)
>->  HashAggregate  (cost=27.00..40.60 rows=1360 width=32)
>  Group Key: b.id
>  ->  Seq Scan on b  (cost=0.00..23.60 rows=1360 width=32)
>->  Hash  (cost=23.60..23.60 rows=1360 width=32)
>  ->  Seq Scan on a  (cost=0.00..23.60 rows=1360 width=32)
> (7 rows)
>
> b)
>
> create table a (id text primary key);
>
> create table b (id text primary key);
>
> explain   select a.*
>  from a
> left join (select distinct id from b) as b
>on a.id = b.id;
>
>   QUERY PLAN
> --
>  Seq Scan on a  (cost=0.00..23.60 rows=1360 width=32)

Yeah, it looks like the code to check for distinctness in the subquery
fails to consider that the join condition may contain RelabelTypes
instead of plain Vars.

The join would be removed if you'd written:

explain select a.* from a left join b on a.id = b.id;

so really the subquery version should be too.

I'm undecided if this should be classed as a bug or just a missed
optimisation. Certainly, the original code should have done this, so
I'm leaning slightly towards this being a bug.

The attached fixes.

(CC'd -hackers since we're starting to discuss code changes. Further
discussion which includes -hackers should drop the general list)

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


join_removal_subquery_fix.patch
Description: Binary data



Re: [GENERAL] looking for a globally unique row ID

2017-09-14 Thread David G. Johnston
On Thu, Sep 14, 2017 at 12:45 AM, Rafal Pietrak <ra...@ztk-rp.eu> wrote:

> Hello everybody,
>
> Can anybody help me find a way to implement an ID which:
>
> 1. guarantees being unique across multiple tables.
>
> 2. guarantees its uniqueness not only during INSERT, but also during the
> lifetime of the database/application (e.i. during future UPDATES).
>
> 3. guarantees persistence of value across database backup/restore/upgrade.
>
> an obvious candidate - a single SERIAL() (same serial) used in every
> table that needs that ID does not guarantee (2).
>

​I don't see how PostgreSQL can provide an absolute guarantee here.  As
others have been saying you can get very close, though.  All of the
approaches share the same basic property - you need to convert client SQL
into "requests for changes" and perform the actual changes within
system-managed code, while restricting any possibility for applications to
make those changes themselves.  You can get a basic version of this using
triggers.  Or you can go all-out and write API functions for every one of
these tables and have the application perform CRUD only via these
functions.  These functions then, and not the application, would control
key generation.  You disallow updating IDs, and when inserting an ID you
insert it into an "id log" table that has a unique index on it; if that
insertion succeeds you can then associate it with the record being inserted
into the main table.  You could set up an FK as well but it wouldn't
provide the needed guarantee.
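
A rough sketch of the trigger-based version (hypothetical names; it only
covers INSERT and assumes UPDATEs of the id column are blocked separately):

CREATE TABLE id_log (id bigint PRIMARY KEY);

CREATE FUNCTION claim_global_id() RETURNS trigger AS $$
BEGIN
    -- a unique violation here means the id was already claimed by some table
    INSERT INTO id_log (id) VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER claim_global_id_t
    BEFORE INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE claim_global_id();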

Nothing will stop an administrator, or a user with incorrectly configured
permissions, from bypassing all of that but properly configured application
roles will be unable to do so.

David J.


Re: [GENERAL] Numeric numbers

2017-09-02 Thread David G. Johnston
On Saturday, September 2, 2017, Олег Самойлов <ol...@mipt.ru> wrote:
>
> There is only 20 "3" after ".". Well, may be this is not a problem, but
> why are they infinite number of "0" after the point? I can write even
>
> => select (1::numeric/3-0.)*1e10;
> ?column?
> 
>  0.
>
> Result the same. According to the docs: "Numeric values are physically
> stored without any extra leading or trailing zeroes."
>
>
What you see on the screen is not what is physically stored.  The
documentation about the numeric type on the server is accurate, but
apparently the psql application takes some liberties as to what it chooses
to display.  Its display rules for numeric don't seem to be user-documented
or configurable.

David J.


Re: [GENERAL] Fields re-ordered on JOIN with * and USING

2017-09-01 Thread David G. Johnston
On Fri, Sep 1, 2017 at 2:25 PM, Ken Tanzer <ken.tan...@gmail.com> wrote:

> Hi.  I recently noticed that when doing a SELECT * with USING, that the
> join field(s) appear first in the output.  I'd never noticed that before,
> and was just curious if that is expected behavior or not.  Thanks.
>

I don't recall if or where it is documented but it is intentional, as is
the documented fact that only one instance of the named column appears in
the output.
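
For example, with two toy tables:

CREATE TABLE a (id int, x text);
CREATE TABLE b (id int, y text);

SELECT * FROM a JOIN b USING (id);
-- output columns: id, x, y  (a single "id", listed first)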

David J.


Re: [GENERAL] pgadmin - import a CSV with nulls?

2017-08-31 Thread David G. Johnston
On Thu, Aug 31, 2017 at 1:04 PM, George Neuner <gneun...@comcast.net> wrote:

> Does anyone know a way to do this reliably?
>

​The psql "\copy" meta-command should be capable of doing what you desire.
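
For instance, a sketch with a hypothetical table and file, treating
unquoted empty fields as NULL:

\copy mytable FROM 'data.csv' WITH (FORMAT csv, HEADER, NULL '')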

David J.​


Re: [GENERAL] Table create time

2017-08-31 Thread David G. Johnston
On Wednesday, August 30, 2017, <haman...@t-online.de> wrote:

>
> Hi,
>
> is there a way to add a table create (and perhaps schema modify) timestamp
> to the system?
>
>
There is not.  You may wish to search the archives for discussions as to
why previous requests for this feature have not resulted in patches.

David J.


Re: [GENERAL] Create Action for psql when NOTIFY Recieved

2017-08-28 Thread David G. Johnston
On Mon, Aug 28, 2017 at 6:42 PM, Jerry Regan <
jerry.re...@concertoglobalresources.com> wrote:

> Let’s suppose for a moment that I piped the output of a psql instance to
> awk or some similar program, configured to detect the NOTIFY. That program
> would then spawn a process to actually perform the work, parameters being
> whatever is part of the NOTIFY. Both this psql instance and the awk script
> would be dedicated to this task.
>
> Given this is not intended in any way to be production quality code - in
> fact, it’s intended to deliver XML to the client server for validation
> (xmllint) in a development/test environment - do you see anything that
> clearly won’t work?  Also, this would be a very low volume connection.
> Perhaps one NOTIFY in five minutes - or longer.
>
>
​I've been curious about having a long-running psql instance that could be
controlled by an external process (named pipes/fifos I believe).  It seems
like you might actually have a chance to get that working if you, 1, intend
to perform the notification polling automatically​ and, 2, don't wish to
lose any notifications (i.e. you must keep the psql process that issues
LISTEN running continuously).  It seems you'd actually need two of these
since you don't actually want the output to be sent to stdout or a normal
file but rather a file that is linked to the stdin of yet another long
running process.

I believe *nix provides sufficient tools but whether psql is written to a
sufficient level of compatibility to leverage them is something I don't
know; the last time I mentioned this I got one other person in the
same boat (thoughtful but not worth the effort to investigate and R) and
no one speaking up to claim they've done it already.

Given your rough description I'm not seeing why you wouldn't just have the
trigger write a record out to some kind of journal table and poll that
table for new records whenever you wish instead of depending upon
LISTEN/NOTIFY.
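
A bare-bones sketch of that journal-table idea (hypothetical names;
:last_seen_id is a psql variable the client maintains):

CREATE TABLE xml_jobs (
    id         bigserial PRIMARY KEY,
    payload    text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- the trigger (or application) inserts rows here instead of issuing NOTIFY;
-- the client polls on its own schedule, e.g. from cron:
SELECT id, payload
  FROM xml_jobs
 WHERE id > :last_seen_id
 ORDER BY id;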

David J.


Re: [GENERAL] Create Action for psql when NOTIFY Recieved

2017-08-28 Thread David G. Johnston
On Mon, Aug 28, 2017 at 1:28 PM, Jerry Regan <
jerry.re...@concertoglobalresources.com> wrote:

> My concern is how, after LISTENing in psql, I can tell it what to do when
> the NOTItFY is received.
>

​As far as I am aware you cannot.  The docs for psql, and its feature set,
with respect to LISTEN, are minimal and basically say psql will print out
notifications to stdout (this I'm forced to assume or read the code) and
will poll for notifications whenever it sends a query to the server.

https://www.postgresql.org/docs/current/static/app-psql.html

​"​Whenever a command is executed, psql also polls for asynchronous
notification events generated by LISTEN and NOTIFY."

I suspect the feature request would be something like:

\set NOTIFY_PROGRAM './process-notify-request.bash'  (or an equivalent
meta-command)

And psql would invoke said program and pass the content of the notification
payload to it via stdin.

Given what we allow for "\copy" I don't see any technical or conceptual
problems with such a feature.  Just needs someone to be its primary author.

David J.


Re: [GENERAL] DROP [TEMP] TABLE syntax, as reason why not?

2017-08-24 Thread David G. Johnston
On Wed, Aug 23, 2017 at 6:08 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> "David G. Johnston" <david.g.johns...@gmail.com> writes:
> > I'm wondering if there is anything technical preventing someone from
> making:
>
> > DROP TEMP TABLE tablename;
>
> There is no great need for that because you can get the semantics you're
> asking for with "DROP TABLE pg_temp.tablename".
>

Furthermore, as a matter of good database management ideally the role
creating and dropping temporary tables is different from the role that
retains ownership​ of permanent tables - so that even if the wrong table
was selected object permissions would prevent its being dropped.

I don't think I'd -1 a patch that attempted to provide some incremental
improvement here but I just don't see one being written or getting enough
support to go through.  And that doesn't bother me - even if I was in a
position to do so I likely wouldn't write one at this point.

David J.


Re: [GENERAL] 'value too long' and before insert/update trigger

2017-08-23 Thread David G. Johnston
On Wednesday, August 23, 2017, Kevin Golding <kgold...@axessgroup.com.au>
wrote:

> Presumably the length validation is being done before the trigger is run.
> Is there some way this could be changed so the trigger happens first?
>

The input tuple passed into the trigger is a valid record of the same type
as the table to which it is attached.  This means that while table
constraints are not enforced all column values must already be valid for
the defined column type.  I suspect changing this property of the system is
unlikely though I agree that I too have sometimes wished that a trigger
could be used to make invalid data, for a given type, valid.

Some hackish use of views, and triggers thereon, may provide an avenue to
gradual migration.
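
One hedged sketch of that view-based workaround (hypothetical names; assumes
items.code is varchar(10) and that silent truncation is acceptable):

CREATE VIEW items_loose AS
    SELECT code::text AS code FROM items;   -- text column, so no length check on input

CREATE FUNCTION items_loose_ins() RETURNS trigger AS $$
BEGIN
    INSERT INTO items (code) VALUES (left(NEW.code, 10));  -- make the value fit
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_loose_ins_t
    INSTEAD OF INSERT ON items_loose
    FOR EACH ROW EXECUTE PROCEDURE items_loose_ins();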

David J.


Re: [GENERAL] jdbc driver vis Release 10

2017-08-23 Thread David G. Johnston
On Wed, Aug 23, 2017 at 4:33 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

> I see no mention of a new jdbc driver on the release notes for Beta 1.
> Does that mean there isn't one?


​Whose release notes?  PostgreSQL Server?  I don't believe the server
release notes ever talk about external projects.​

It seems quite a few significant changes (including version numbering)
have happened to the JDBC project in the past year but you will need to go
directly to the project's site/GitHub to keep abreast of all of it.

David J.


[GENERAL] DROP [TEMP] TABLE syntax, as reason why not?

2017-08-23 Thread David G. Johnston
Hey all,

I'm wondering if there is anything technical preventing someone from making:

DROP TEMP TABLE tablename;

work.  Implementation-wise the command would fail if a temporary table of
the given name doesn't exist.  Today, if a temporary table exists it will
be dropped, but if tablename is not schema-qualified and no temporary table
exists, a permanent table with the same name found in the search path would
be dropped instead.

Plus it's symmetrical with CREATE [TEMP] TABLE.

David J.


Re: [GENERAL] Function not inserting rows

2017-08-23 Thread David G. Johnston
On Wed, Aug 23, 2017 at 8:23 AM, Frank Foerster <ivaypo...@gmail.com> wrote:

>
> sql = "select * from api_dev.add_texts_to_item( %s, %s ); x x"
> i get the following python-error:
> psycopg2.ProgrammingError: FEHLER:  Syntaxfehler bei »s«
> LINE 1: ...dd_texts_to_item( 1234, ARRAY['PSYCOPG1', 'PSYCOPG2'] ); s s
>
> But the created statement looks syntax-wise identical to the
> pgadmin-statement (except for the forced error of course):
>
> select * from api_dev.add_texts_to_item( 444, array['PGADM1', 'PGADM2'] );
>
>
Try adding the following to your function to see what your function sees as
being the value of the p_item_texts argument.

RAISE EXCEPTION '%', p_item_texts;

I'm assuming that:

self.cur_.execute( sql, (doc_id, isins, ) )

performs dynamic string substitution as opposed to generating a prepared
statement.  The placeholder values supplied to a prepared statement are
treated as literals.

You may also want turn on statement logging in the server.

David J.


Re: [GENERAL] What is the proper query

2017-08-22 Thread David G. Johnston
On Mon, Aug 21, 2017 at 9:08 PM, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, ALL,
> draft=# SELECT * FROM information_schema.key_column_usage
>
​[...]​


> There are 3 foreign keys in that table.
>
> Is there a way to get values of 0, 1, 1, and 2 for the ordinal position?
>

Not using the key_column_usage view.  What that view is doing is basically
saying (my understanding from reading the docs, not testing it out):

CREATE TABLE tbl_pk (col1 int, col2 int,
    UNIQUE (col1, col2));

CREATE TABLE tbl_fk (col1 int, col2 int,
    FOREIGN KEY (col2, col1) REFERENCES tbl_pk (col1, col2));

Now your ordinal/position rows would be:

(1, 2)
(2, 1)

instead of:

(1, 1)
(2, 2)

if you had defined the FK and PK with the same column names in the same
order, like is done almost always and like you did in your example.

If you want to enumerate constraints you need to use a different
information_schema view or, as Melvin showed, use pg_catalog.  I'm not
fluent enough to provide examples.  If you provide the question/problem you
are trying to resolve others will likely offer suggestions.

David J.


Re: [GENERAL] What is the proper query

2017-08-22 Thread David G. Johnston
On Tue, Aug 22, 2017 at 8:43 AM, Igor Korot <ikoro...@gmail.com> wrote:

> Or this is the bug in 9.1?
> Since it looks like there are 2 columns with the same info in 1
> table/view
>

​This old email thread sounds similar to what you are describing here.

​https://www.postgresql.org/message-id/56D0C4B8.7020200%40aklaver.com

David J.


Re: [GENERAL] Count column with name 'count' returns multiple rows. Why?

2017-08-18 Thread David G. Johnston
On Fri, Aug 18, 2017 at 1:47 PM, Peter J. Holzer <hjp-pg...@hjp.at> wrote:

> So apparently
> ​ ​
> columnname open-parenthesis tablename closed-parenthesis is a specific
> syntactic construct, but I can't find it documented anywhere.


​The documentation linked to speaks mainly in terms of "composite types".
A table IS a composite type (i.e., there is an implicit one of the same
name) for these purposes.​
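
A toy example of that equivalence (hypothetical table):

CREATE TABLE t (label text);
INSERT INTO t VALUES ('a'), ('b');

SELECT t.label  FROM t;   -- attribute notation
SELECT label(t) FROM t;   -- functional notation over the implicit composite type

Both return the column values row by row, which is how a column that
happens to be named count can end up being selected in that spelling rather
than the aggregate, as reported in the thread.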

David J.


Re: [GENERAL] Where is pg_hba.conf

2017-08-13 Thread David G. Johnston
On Sunday, August 13, 2017, Igor Korot <ikoro...@gmail.com> wrote:

> Also, I presume that the address in this file is the address of the
> machine where the server is located, not the address from where the
> connection is initiated.
>

Not according to the docs.

https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html

David J.


Re: [GENERAL] sequence used on null value or get the max value for a column whith concurrency

2017-08-10 Thread David G. Johnston
Thinking aloud...

On Thu, Aug 10, 2017 at 3:05 PM, marcelo <marcelo.nico...@gmail.com> wrote:

> In some table, I have a bigint column which at the app level can be null.
> Call it "DocumentNumber", and of course is not the PK.
> In most cases, the applications give some value to the column.
>
> But sometimes, the value remains null, expecting the backend or someone
> assign it a unique value.
>
> Could I use a sequence only when the field arrives to the backend as null?
> How? Using a triger?
>

Can you reserve a portion of the value range for auto-generated numbers,
separate from the values the application assigns?  If so it would be
reasonably simple to invoke nextval() in a trigger.

How could I get the max value for the column and increment it by one, but
> with concurrency warranty? Something as a table lock?
>

​One option would be to maintain the value in a separate table that you
update on insert using "UPDATE tbl SET col = col + 1 RETURNING col INTO
new_doc_num"

​You could probably make it an unlogged table as well and you'd return from
the trigger function with new_doc_num if it's non-null; otherwise you'd
branch and re-create the record before returning the just-queried maximum +
1.
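
For the first approach, a minimal sketch (hypothetical table and column
names, assuming a value range can be reserved for generated numbers):

CREATE SEQUENCE doc_number_seq START 1000000000;   -- reserved range, by convention

CREATE FUNCTION assign_doc_number() RETURNS trigger AS $$
BEGIN
    IF NEW.document_number IS NULL THEN
        NEW.document_number := nextval('doc_number_seq');
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER assign_doc_number_t
    BEFORE INSERT ON documents
    FOR EACH ROW EXECUTE PROCEDURE assign_doc_number();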

David J.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread David G. Johnston
On Wed, Aug 9, 2017 at 12:26 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> *>I'm am wondering whether "REASSIGNED OWNED" **needs fixing as well*
>
> *Possibly, but as the op is on 9.3, it is not available to him.*
>

​You should check the docs again...​


> *I would also argue that since* *"OWNER TO new_owner" is available in all
> other ALTER object statements, it is an omission and should be*
> *included for extenstions as well..*
>

​As am I, but omission or not I don't recall that we've ever back-patched
new SQL grammar.

David J.
​


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread David G. Johnston
On Wed, Aug 9, 2017 at 11:30 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
>
> On Wed, Aug 9, 2017 at 1:56 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wed, Aug 9, 2017 at 10:37 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>>> Scott Marlowe <scott.marl...@gmail.com> writes:
>>> > Seems like something that should be handled by alter doesn't it?
>>>
>>> I have some vague memory that we intentionally didn't implement
>>> ALTER EXTENSION OWNER because we were unsure what it ought to do
>>> about ownership of objects belonging to the extension.  If the answer
>>> is "nothing" then it wouldn't be hard to add such a statement.
>>>
>>
>> The documented contract of CREATE EXTENSION(1)/ALTER EXTENSION ADD
>> MEMBER(2) requires that the extension owner and the owner of the member
>> objects be one-and-the-same (I suppose the inclusion of DROP in (2) makes
>> this debatable).  I do not know what happens today if someone tries to
>> ALTER OBJECT SET OWNER on a member object to a role other than the owner of
>> the extension.  From the docs I'd suggest that it should fail.  Likewise,
>> ALTER EXTENSION OWNER should cascade to all members - which (3), and normal
>> dependency tracking, seems to make straight-forward.
>>
>> 1>The user who runs CREATE EXTENSION becomes the owner of the extension
>> for purposes of later privilege checks, as well as the owner of any objects
>> created by the extension's script.
>>
>> 2>You must own the extension to use ALTER EXTENSION. The ADD/DROP forms
>> require ownership of the added/dropped object as well.
>>
>> 3>CREATE EXTENSION additionally records the identities of all the created
>> objects, so that they can be dropped again if DROP EXTENSION is issued.
>>
>> David J.
>>
>>
>
>
> *David,*
>
> *The problem is, The current owner of the extension needs to be dropped.
> No one should have to jump through hoops*
> *just to be able to do that. There is definitely a need for an*
>
> *ALTER EXTENSION name OWNER TO new_owner.*
> *As Tom Lane has already pointed out, it would not be hard to add that.*
>
>
​I'm not sure what it is you think I'm missing here.  My only point was I'm
tending to think that "nothing", while workable, diverges from what I would
expect - that an extension and all of its member objects should, at all
times, share a common owner.  I don't imagine that either definition would
be abnormally difficult to implement for v11.

I am wondering whether "REASSIGN OWNED" needs fixing as well...since
that command is specifically designed to handle this use case.

https://www.postgresql.org/docs/9.6/static/sql-reassign-owned.html
​
​

David J.


Re: [GENERAL] How to ALTER EXTENSION name OWNER TO new_owner ?

2017-08-09 Thread David G. Johnston
On Wed, Aug 9, 2017 at 10:37 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Scott Marlowe <scott.marl...@gmail.com> writes:
> > Seems like something that should be handled by alter doesn't it?
>
> I have some vague memory that we intentionally didn't implement
> ALTER EXTENSION OWNER because we were unsure what it ought to do
> about ownership of objects belonging to the extension.  If the answer
> is "nothing" then it wouldn't be hard to add such a statement.
>

The documented contract of CREATE EXTENSION(1)/ALTER EXTENSION ADD
MEMBER(2) requires that the extension owner and the owner of the member
objects be one-and-the-same (I suppose the inclusion of DROP in (2) makes
this debatable).  I do not know what happens today if someone tries to
ALTER OBJECT SET OWNER on a member object to a role other than the owner of
the extension.  From the docs I'd suggest that it should fail.  Likewise,
ALTER EXTENSION OWNER should cascade to all members - which (3), and normal
dependency tracking, seems to make straight-forward.

1>The user who runs CREATE EXTENSION becomes the owner of the extension for
purposes of later privilege checks, as well as the owner of any objects
created by the extension's script.

2>You must own the extension to use ALTER EXTENSION. The ADD/DROP forms
require ownership of the added/dropped object as well.

3>CREATE EXTENSION additionally records the identities of all the created
objects, so that they can be dropped again if DROP EXTENSION is issued.

David J.


Re: [GENERAL] Any thoughts on making a psql meta-command "hide (or show alt text) if no results"?

2017-08-08 Thread David G. Johnston
On Tue, Aug 8, 2017 at 6:25 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
> *​H​ave you looked at the TUPLES ONLY option?*
>
> *-t* *--tuples-only*
>
> *Turn off printing of column names and result row count footers, etc. This
> is equivalent to the \t command.*
> *https://www.postgresql.org/docs/9.4/static/app-psql.html
> <https://www.postgresql.org/docs/9.4/static/app-psql.html>*
>
>
​Hadn't pondered it for this usage.  Now that I have I'd say having the
column names be visible for those few times the query returns results would
be nice.​  So, close but not quite.

I eventually saw, in one of the three descriptions for "tuples only", that
the caption is explicitly noted as being hidden in this mode. In my
proposed feature the caption, like the table it is attached to, would be
visible only conditionally.

David J.


[GENERAL] Any thoughts on making a psql meta-command "hide (or show alt text) if no results"?

2017-08-08 Thread David G. Johnston
Hey all, looking for thoughts on a feature request:

I run quite a few queries, using psql, that are intended for exceptional
situations.  When there are no results, which is expected, I still get the
table header and basic frame showing up in the output.  The option I'd like
is to be able to suppress the output of the empty table (and header if
there is one) or possibly substitute the empty table with user-supplied
text.

Thinking something that is used like \g

SELECT * FROM (VALUES (1)) vals (v) WHERE v = 0 \ghideifempty

[SQL] \galtifempty 'No values matching 0 in vals'

The names are descriptive, not suggestions...

David J.


[GENERAL] PostgreSQL with PowerBuilder, and Identity keys (serials)

2017-08-06 Thread David G. Johnston
On Saturday, August 5, 2017, Dan Cooperstock at Software4Nonprofits <
i...@software4nonprofits.com> wrote:
>
> As I have mentioned in several replies, I have tested all of this code
> directly in SQL statements and they work perfectly. It's only the
> interaction with PowerBuilder that isn't working right, which is why I keep
> saying that further answers from anyone that hasn't solved this problem of
> using Postgres with PowerBuilder, and getting identity retrieval to work in
> PowerBuilder, are not at this point worthwhile.
>

If it is PostgreSQL's problem you should be able to setup statement logging
on the server and see what is happening at the SQL level.

David J.


Re: [GENERAL] select md5 result set

2017-08-02 Thread David G. Johnston
On Wed, Aug 2, 2017 at 4:50 PM, Peter Koukoulis <pkoukou...@gmail.com>
wrote:

> david, thanks for the help.
>
> Would this be the equivalent, for the statement in your email, for table
> TEST1 (x integer, y varchar(20)):
>
> ft_node=# SELECT md5(string_agg(vals::text, ''))
> ft_node-# from (select x,y from test1) vals(x,y);
> ?
>
>
​The subquery is redundant if you already have a table:

select md5(string_agg(test1::text, '')) from test1;
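
One caveat worth adding: without an ORDER BY inside the aggregate the row
order, and therefore the hash, isn't guaranteed to be stable, so for a
reproducible checksum something like this (assuming x, y define an
ordering) is safer:

select md5(string_agg(test1::text, '' ORDER BY x, y)) from test1;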

David J.​


Re: [GENERAL] select md5 result set

2017-08-02 Thread David G. Johnston
On Wed, Aug 2, 2017 at 3:42 PM, Peter Koukoulis <pkoukou...@gmail.com>
wrote:

>
> SQL> select dbms_sqlhash.gethash('select x,y from test1',2) as md5_value
> from dual;
>
> MD5_VALUE
> 
> 
> 9FDA7FA725B783172CA371DA04AD5754
>
>
> Can I do something similar in PostgreSQL ?
>
>
​Similar.​

 SELECT md5(string_agg(vals::text, ''))
 FROM ( VALUES (1, 2), (2, 3) ) vals (x, y)

​David J.


Re: [GENERAL] Would you add a --dry-run to pg_restore?

2017-08-02 Thread David G. Johnston
On Wed, Aug 2, 2017 at 10:10 AM, Edmundo Robles <edmu...@sw-argos.com>
wrote:

> I  imagine   pg_restore can  execute  the instructions on dump but  don't
>  write on disk.   just like David said: "tell me what is going to happen
> but don't actually do it"
>

You may wish to respond to the actual points being made as to why
separating out "writing" from "execution" doesn't provide meaningful value
- especially not for the effort it would take.

David J.


Re: [GENERAL] Do not INSERT if UPDATE fails

2017-08-02 Thread David G. Johnston
On Wed, Aug 2, 2017 at 8:58 AM, Alexander Farber <alexander.far...@gmail.com
> wrote:

> However if the user record is not found or the user already has vip_until
> >= CURRENT_TIMESTAMP (i.e. the user has already purchased "vip status") I
> would like to cancel the INSERT.
>
>
You can "join" two DML commands by using a Common Table Expression (CTE)
(i.e., WITH / SELECT)​.  You would need to make it so the UPDATE happens
first and if there are no results the INSERT simply becomes a no-op.

David J.


Re: [GENERAL] Would you add a --dry-run to pg_restore?

2017-08-02 Thread David G. Johnston
On Wed, Aug 2, 2017 at 9:02 AM, Edmundo Robles <edmu...@sw-argos.com> wrote:

> Will be great to have a dry  run option, because   the time  to verify
>  reduces a lot and  will save space on disk, because just  execute  with no
> write to disk.
>

"Dry run", the way I understand it, can be accomplished via the "-l" (ell)
switch.  It means - "tell me what is going to happen but don't actually do
it".  I don't know what to call what you are describing but I don't see how
it could reasonably be made to work and give the admin confidence that a
true restoration would be valid.  Maybe I'm just being unimaginative but at
minimum you'd have to write out the tables to disk so data could be loaded
into them.  Then data would have to be persisted in order to validate the
constraints at the end.

If you are running out of disk space you should get larger/more disks.

Personally, I'd probably setup a dedicated "test restore" cluster with lots
of HD and put stuff like "fsync=off" into its postgresql.conf.

I could see having a "--make-tables-unlogged" option that would convert,
on-the-fly, all CREATE TABLE commands to "CREATE UNLOGGED TABLE" commands.

David J.


Re: [GENERAL] RETURNS SETOF RECORD with 1 column

2017-07-28 Thread David G. Johnston
On Thursday, July 27, 2017, David G. Johnston <david.g.johns...@gmail.com>
wrote:

> On Thursday, July 27, 2017, Vincenzo Romano <vincenzo.rom...@notorand.it>
> wrote:
>>
>> The main difference is that with RETURNS SETOF RECORD I still get the
>> "usual"(tm) function argument list in the usual place: between two
>> parentheses.
>> It's a matter of style. And a consistent one.
>> But I still don't get the point for not having it for a single column.
>>
>>
> Docs say: " When there are OUT or INOUT parameters, the RETURNS clause
> can be omitted. ". Sounds like you should you do just that.
>
>
Except you'd have nowhere to put the "setof" modifier... So, yeah, you
probably aren't going to get your personal style preference catered to here.

David J.


Re: [GENERAL] RETURNS SETOF RECORD with 1 column

2017-07-28 Thread David G. Johnston
On Thursday, July 27, 2017, Vincenzo Romano <vincenzo.rom...@notorand.it>
wrote:
>
> The main difference is that with RETURNS SETOF RECORD I still get the
> "usual"(tm) function argument list in the usual place: between two
> parentheses.
> It's a matter of style. And a consistent one.
> But I still don't get the point for not having it for a single column.
>
>
Docs say: " When there are OUT or INOUT parameters, the RETURNS clause can
be omitted. ". Sounds like you should do just that.

David J.


Re: [GENERAL] Performance of IN-clause with many elements and possible solutions

2017-07-24 Thread David G. Johnston
On Mon, Jul 24, 2017 at 8:11 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> ​[*docs]
>  If the data were perfectly distributed, with the same
>  * number of tuples going into each available bucket, then the bucketsize
>  * fraction would be 1/nbuckets.  But this happy state of affairs will
> occur
>  * only if (a) there are at least nbuckets distinct data values, and (b)
>  * we have a not-too-skewed data distribution.  Otherwise the buckets will
>  * be nonuniformly occupied.


​Thanks, I have a better feel now.  Using this example (200 inner relation
rows) is pretty poor since at this scale there doesn't seem to be enough
data to make a noticeable difference.

But anyway, the above comment is only being applied when dealing with a
non-unique ​inner relation; however, the fraction used is 1/nbuckets for
any unique relation regardless of its size.

if (IsA(inner_path, UniquePath))
    innerbucketsize = 1.0 / virtualbuckets;
else

And to clarify for others only reading this...the 200 on the "VALUES" node
is there because there are 200 literal values in the value_list.  The 200
on the resulting Hash (and HashAggregate in the example) node is there
because of DEFAULT_NUM_DISTINCT (changing the query limit to 300 only
changed the former).  Further, since it is only the default, the fraction
charged out is 1/10 instead of the 1/200 that would be used if the 200 were
a real number - or 1/1024 if those 200 rows were known to be
themselves unique.

For me, I'm seeing that the expected number of input rows doesn't factor
into the innerbucketsize computation directly (possibly excepting a scaling
factor adjustment).

I can understand better, now, why this seemingly perfect example of a
semi-join query gets executed with an extra distinct/grouping node.

David J.


Re: [GENERAL] Performance of IN-clause with many elements and possible solutions

2017-07-24 Thread David G. Johnston
On Mon, Jul 24, 2017 at 7:58 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Jul 24, 2017 at 3:46 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>>
>> The cost to form the inner hash is basically negligible whether it's
>> de-duped or not, but if it's not (known) de-duped then the cost
>> estimate for the semijoin is going to rise some, and that discourages
>> selecting it.
>>
>
> ​Why does the "hash semi join" care about duplication of values on the
> inner relation?  Doesn't it only care whether a given bucket exists
> irrespective of its contents?
>

Rather, it cares about the contents in so far as confirming that at least
one of the tuples in the bucket indeed has the same joining value as the
outer relation (I lost track of the fact that two values can share the same
hash).  But once it finds one it can move on to the next outer relation tuple
while an inner join would have to spend more time looking for additional
matches.

David J.
​


Re: [GENERAL] Performance of IN-clause with many elements and possible solutions

2017-07-24 Thread David G. Johnston
On Mon, Jul 24, 2017 at 3:46 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

>
> The cost to form the inner hash is basically negligible whether it's
> de-duped or not, but if it's not (known) de-duped then the cost
> estimate for the semijoin is going to rise some, and that discourages
> selecting it.
>

​Why does the "hash semi join" care about duplication of values on the
inner relation?  Doesn't it only care whether a given bucket exists
irrespective of its contents?

Looking at those explains it would seem the "hash semi join" is simply
inherently more expensive to execute compared to a "hash join" and that the
act of de-duping the inner relation would have to be quite expensive to
overcome the gap.  I cannot reconcile this with the previous paragraph
though...

Pointing me to the readme or code file (comments) that explains this in
more detail would be welcome.  Not sure what to grep for - "Hash Semi Join"
only turns up a couple of expected output results...

Thx.

David J.


Re: [GENERAL] Performance of IN-clause with many elements and possible solutions

2017-07-24 Thread David G. Johnston
On Mon, Jul 24, 2017 at 3:22 PM, Dmitry Lazurkin <dila...@gmail.com> wrote:

> ALTER TABLE ids ALTER COLUMN id SET NOT NULL;
> EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM ids WHERE id IN
> :values_clause;
>
>  Aggregate  (cost=245006.46..245006.47 rows=1 width=8) (actual
> time=3824.095..3824.095 rows=1 loops=1)
>Buffers: shared hit=44248
>->  Hash Join  (cost=7.50..235006.42 rows=419 width=0) (actual
> time=1.108..3327.112 rows=3998646 loops=1)
>...
>

​You haven't constrained the outer relation (i.e., :values_clause) to be
non-null which is what I believe is required for the semi-join algorithm to
be considered.​

David J.


Re: [GENERAL] Performance of IN-clause with many elements and possible solutions

2017-07-24 Thread David G. Johnston
On Mon, Jul 24, 2017 at 3:12 PM, Dmitry Lazurkin <dila...@gmail.com> wrote:

> And I have one question. I don't understand why IN-VALUES doesn't use
> Semi-Join? PostgreSQL has Hash Semi-Join...  For which task the database
> has node of this type?
>

​Semi-Join is canonically written as:

SELECT *
FROM tbl
WHERE EXISTS (SELECT 1 FROM tbl2 WHERE tbl.id = tbl2.id)

The main difference between IN and EXISTS is NULL semantics.

David J.


Re: [GENERAL] Performance of IN-clause with many elements and possible solutions

2017-07-24 Thread David G. Johnston
On Sun, Jul 23, 2017 at 4:35 AM, dilaz03 . <dila...@gmail.com> wrote:

> - IN-VALUES clause adds new node to plan. Has additional node big
> overhead? How about filter by two or more IN-VALUES clause?
>

​IN-VALUES is just another word for "TABLE" which is another word for
"RELATION".  Writing relational database queries that use explicit
relations is generally going to give you the best performance.

Basically you want to write something like:

SELECT *
FROM ids
JOIN ( :values_clause ) vc (vid) ON (vc.vid = ids.id)​

or

WITH vc AS (SELECT vid FROM  ORDER BY ... LIMIT )
SELECT *
FROM ids
JOIN vc ON (vid = ids.id)

"IN ('l1','l2','l3')" is nice and all but as demonstrated the mechanics of
executing that are different, and slower, than processing relations and
tuples.  For a small number of items the difference is generally not
meaningful and so the convenience of writing (IN (...)) is worth taking.

David J.


Re: [GENERAL] What is the problem with this query?

2017-07-23 Thread David G. Johnston
On Sunday, July 23, 2017, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, David,
>
> On Sun, Jul 23, 2017 at 5:07 PM, David G. Johnston
> <david.g.johns...@gmail.com> wrote:
> > On Sunday, July 23, 2017, Igor Korot <ikoro...@gmail.com>
> wrote:
> >>
> >> Is "IF" operator not supported by PostgreSQL
>
> So how do I write this properly?
>
>
CREATE FUNCTION or a DO block. See docs for usage.

Since the language is pl/pgsql you can learn how to write it here.

 https://www.postgresql.org/docs/9.6/static/plpgsql.html
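
For example, a throwaway DO block (hypothetical table name) showing IF
living inside pl/pgsql rather than plain SQL:

DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM information_schema.tables
               WHERE table_name = 'mytable') THEN
        RAISE NOTICE 'mytable exists';
    ELSE
        RAISE NOTICE 'mytable is missing';
    END IF;
END;
$$;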

David J.


Re: [GENERAL] What is the problem with this query?

2017-07-23 Thread David G. Johnston
On Sunday, July 23, 2017, Igor Korot <ikoro...@gmail.com> wrote:
>
> Is "IF" operator not supported by PostgreSQL
>

IF is pl/pgsql, not SQL.

David J.


Re: [GENERAL] Postgres csv logging

2017-07-21 Thread David G. Johnston
It is customary to indicate when you've posted the same question to other
forums - in this case SO.

https://dba.stackexchange.com/questions/180263/postgres-log-query-and-command-tag-to-csv

As the comment there says your config and your output seem at odds.  Though
I think your confusion is still adequately represented.

On Wed, Jul 19, 2017 at 4:41 PM, Alessandro_feliz <
alessandro_fe...@hotmail.com> wrote:

> Why is the first line, that has the query executed, saying that the command
> tag is "idle", shouldn't it say "SELECT"?


"Command tag: type of session's current command" - given your example I
suppose this means "initial state of the session" as opposed to being a
category tag of what the current line is doing.  What you are seeing is
"start at idle and parse" the incoming statement text.  That is one action
- parse.  The subsequent action is "execute" the contents of the SELECT
that is now in the active state of the session.

If the above is correct the docs could be more clear since I'd agree that
your interpretation of the field seems more logical - though knowing
initial state is quite useful in its own right.

David J.


Re: [GENERAL] Backward compatibility

2017-07-21 Thread David G. Johnston
On Fri, Jul 21, 2017 at 8:49 AM, Igor Korot <ikoro...@gmail.com> wrote:

> MySQL uses this:
> https://dev.mysql.com/doc/refman/5.7/en/mysql-get-server-version.html.
> Is it safe to assume that PostgreSQL calculates the version the same way?
>
​
Yes and no.  Things are changing with this next release.  The next two
major releases will be:

10.x  (or 10.0.x using historical nomenclature - 1000xx)
11.x (or 11.0.x using historical nomenclature - 1100xx)

For prior releases the major versions are:

9.2.x
9.3.x
9.4.x
9.5.x
9.6.x

If you want to consider the 9 to be "major" and the .[2-6] to be minor for
mechanical purposes that's fine but the change from 9.5 to 9.6 is a major
change with backward incompatibilities - which a minor change doesn't
allow.  In the new setup the thing you call "minor" will always remain at
zero in order to eventually mitigate the need to have this kind of
discussion. Since it is always going to be "0" we simply omit printing it.

David J.


Re: [GENERAL] Backward compatibility

2017-07-20 Thread David G. Johnston
On Thursday, July 20, 2017, David G. Johnston <david.g.johns...@gmail.com>
wrote:

> On Thursday, July 20, 2017, Igor Korot <ikoro...@gmail.com> wrote:
>
>> Hi, David,
>>
>> On Thu, Jul 20, 2017 at 10:23 PM, David G. Johnston
>> <david.g.johns...@gmail.com> wrote:
>> > On Thu, Jul 20, 2017 at 7:13 PM, Igor Korot <ikoro...@gmail.com> wrote:
>>
>> >> Is there a query or a libpg function which can return the version of
>> >> the server I'm running?
>> >
>
>
> Its PQ not PG - and I'd doubt it but I don't directly use libpq.
>
>
Actually, The docs do cover how to do this directly in libpq.

David J.


Re: [GENERAL] Backward compatibility

2017-07-20 Thread David G. Johnston
On Thursday, July 20, 2017, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, David,
>
> On Thu, Jul 20, 2017 at 10:23 PM, David G. Johnston
> <david.g.johns...@gmail.com> wrote:
> > On Thu, Jul 20, 2017 at 10:23 PM, Igor Korot <ikoro...@gmail.com> wrote:
>
> >> Is there a query or a libpg function which can return the version of
> >> the server I'm running?
> >


It's PQ not PG - and I'd doubt it, but I don't directly use libpq.


> Is there a way to get a version_major, version_minr and version_extra?
>
>
There is no 'extra'.  The last two digits are the minor and everything
before is the major.  Usually you'd use inequality comparisons so there'd
be no point.  If there is a way besides parsing I'm not aware of it.
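
A small sketch of that parsing, following the rule above (e.g. 90124 splits
into 901 - i.e. 9.1 - and 24):

SELECT current_setting('server_version_num')::int / 100 AS major,
       current_setting('server_version_num')::int % 100 AS minor;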

David J.


Re: [GENERAL] Backward compatibility

2017-07-20 Thread David G. Johnston
On Thu, Jul 20, 2017 at 7:23 PM, Igor Korot <ikoro...@gmail.com> wrote:

> On Thu, Jul 20, 2017 at 10:19 PM, Andreas Kretschmer
> <andr...@a-kretschmer.de> wrote:
> >
> >>Is there a query or a libpg function which can return the version of
> >>the server I'm running?
>


> > Select version();
>
> Here is the results:
>
> draft=# SELECT version();
>
>                                  version
> -------------------------------------------------------------------------
>  PostgreSQL 9.1.24 on x86_64-apple-darwin, compiled by
> i686-apple-darwin10-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc.
> build 5658) (LLVM build 2335.6), 64-bit
> (1 row)
>
> Is there a way to get just "9.1.24" without everything else?
>

​SHOW server_version_num;

90124​ should be the result (don't have that version installed to
copy-paste)

90506 is the version I have at my fingertips.

David J.

