Re: Postgres hangs for the query "lock table in exclusive mode"

2018-02-22 Thread David G. Johnston
On Thursday, February 22, 2018, Vinodh NV  wrote:
>
>
> Postgres hangs for the query "lock table  in exclusive mode"
> Can you please let me know what could be the reason and how to resolve it?
>
>
Not readily, but that's why the pg_locks view exists.

https://www.postgresql.org/docs/9.6/static/view-pg-locks.html
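
A sketch of the kind of query that view enables (pg_blocking_pids() exists in
9.6 and later; this example is illustrative, not from the original reply):

SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));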

David J.


Re: Given a set of daterange, finding the continuous range that includes a particular date

2018-02-22 Thread Paul Jungwirth

On 02/22/2018 04:44 PM, Ken Tanzer wrote:
Hi, hoping to get some help with this.  I'm needing to take a series of 
dateranges and, given a specific date, return a single continuous 
daterange that includes that date.


The part about joining multiple touching dateranges to give a single 
continuous daterange is what Richard Snodgrass calls "coalescing" in 
*Developing Time-Oriented Database Applications in SQL*, pages 159 - 
169, available printed or as a free PDF at 
http://www2.cs.arizona.edu/~rts/publications.html (His approach also 
supports overlapping ranges, but it sounds like you don't need that.) If 
you had a coalesced view (or maybe a set-returning function), you could 
do this:


SELECT  term
FROM    coalesced_staff_assign
WHERE   client_id = 5
AND     term @> '2018-01-15'
;

I can't think of any way to avoid scanning all of a given client's 
records, but hopefully client_id alone would be selective enough to 
still give you good performance.


Oh also: in reading Snodgrass's SQL, note that he assumes closed-open 
ranges (i.e. '[)'), so you'll need to adjust some things to fit with 
your closed-closed ranges (or always use `staff_assign_date_end - 
INTERVAL '1 day'` if every assignment is at least 1 day long). On the 
other hand with built-in range types you might be able to simplify his 
pure-SQL solutions.
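
For what it's worth, a coalesced view can be built with window functions
alone.  A minimal sketch, assuming the tbl_staff_assign columns from the
original post, non-overlapping rows, and a NULL end date meaning open-ended:

CREATE VIEW coalesced_staff_assign AS
WITH flagged AS (
    SELECT client_id,
           staff_assign_date,
           COALESCE(staff_assign_date_end, 'infinity'::date) AS date_end,
           -- a new island starts when this row begins more than one day
           -- after the latest end date seen so far; subtract more days
           -- below to tolerate small gaps
           CASE WHEN staff_assign_date - 1
                     > MAX(COALESCE(staff_assign_date_end, 'infinity'::date))
                         OVER (PARTITION BY client_id
                               ORDER BY staff_assign_date
                               ROWS BETWEEN UNBOUNDED PRECEDING
                                        AND 1 PRECEDING)
                THEN 1 ELSE 0 END AS new_island
    FROM tbl_staff_assign
), numbered AS (
    SELECT client_id, staff_assign_date, date_end,
           SUM(new_island) OVER (PARTITION BY client_id
                                 ORDER BY staff_assign_date) AS island
    FROM flagged
)
SELECT client_id,
       daterange(MIN(staff_assign_date), MAX(date_end), '[]') AS term
FROM numbered
GROUP BY client_id, island;

Changing "- 1" to "- 2" or more would also cover the "fudge factor" for
small gaps mentioned in the original post.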


--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Given a set of daterange, finding the continuous range that includes a particular date

2018-02-22 Thread Adrian Klaver

On 02/22/2018 09:05 PM, Adrian Klaver wrote:

On 02/22/2018 05:23 PM, Ken Tanzer wrote:



On Thu, Feb 22, 2018 at 5:05 PM, Adrian Klaver wrote:

    On 02/22/2018 04:58 PM, Ken Tanzer wrote:

        On Thu, Feb 22, 2018 at 4:53 PM, Adrian Klaver wrote:

            On 02/22/2018 04:44 PM, Ken Tanzer wrote:

                Hi, hoping to get some help with this.  I'm needing to take
                a series of dateranges and, given a specific date, return a
                single continuous daterange that includes that date.

                To elaborate a bit, I've got lots of tables that include
                start and end dates.  For example:

                CREATE TABLE tbl_staff_assign (
                    staff_assign_id         SERIAL PRIMARY KEY,
                    client_id               INTEGER NOT NULL REFERENCES
                        tbl_client (client_id),
                    staff_id                INTEGER REFERENCES
                        tbl_staff(staff_id),
                    staff_assign_type_code  VARCHAR(10) NOT NULL REFERENCES
                        tbl_l_staff_assign_type (staff_assign_type_code),
                    staff_assign_date       DATE NOT NULL,
                    staff_assign_date_end   DATE,
                ...

                So a client might leave a program and then return later, or
                they might simply switch to another staff_id.  (In which
                case one record will have an end date, and the next record
                will start on the next day.)  In this case I need to know
                "what period were they continuously in the program that
                includes X date?"  So I'd like to be able to do something
                like:

                "SELECT staff_assign_date,continuous_daterange(
                staff_assign_date, (SELECT
                array_agg(daterange(staff_assign_date,staff_assign_date_end,'[]')
                ) FROM staff_assign sa2 WHERE sa2.client_id=sa.client_id) FROM
                staff_assign sa

                I've done this before with procedures specific to a
                particular table, and working with the start and end dates.
                I'm now wanting to try to do this once, generically, so that
                it will work for all my cases.  So I'm hoping to do this in
                a way that performance isn't horrible.  And it's a little
                unclear to me how much and how I might be able to use the
                daterange operators to accomplish this efficiently.

            The operator I use to solve similar problems:

            https://www.postgresql.org/docs/10/static/functions-range.html

            @>  contains element  '[2011-01-01,2011-03-01)'::tsrange @>
            '2011-01-10'::timestamp   t

        Thanks Adrian.  But how would you apply that to this situation,
        where I have a series of (quite possibly discontinuous) dateranges?

    This is going to depend on a more formal definition of the problem
    with some sample data.  Right now I am trying to reconcile "what
    period were they continuously in the program that includes X date?"
    with "where I have a series of (quite possibly discontinuous)
    dateranges?"  Maybe it's just me, but I can't see how discontinuous
    can also be continuous.



*Start   End*
-- Episode 1
1/1/16  3/30/16
4/1/16  4/30/16
-- Episode 2
1/1/18  1/31/18
2/1/18  NULL

Given 1/15/18, should return 1/1/18-->NULL
Given 1/15/16, should return 1/1/16-->4/30/16


Just thinking out loud here, in a function:

1) For a client_id you can find the min(staff_assign_date).

2) You can create a max(staff_assign_date_end) by using 
COALESCE(staff_assign_date_end, current_date).

3) You now have the outer range for the episodes.

4) In that range of dates, for each staff_assign_date you find the 
staff_assign_date_end that is greater than the staff_assign_date but less 
than the next staff_assign_date. That will give you your episodes.

5) You can then test to see if the X date is in the calculated episodes.

6) As a shortcut you can find the min(staff_assign_date) that is less than 
the X date and see if the staff_assign_date_end is greater than the X date.




Correction to 6)

6) As a shortcut you can find the max(staff_assign_date) that is less than 
the X date and see if the staff_assign_date_end is greater than the X date.
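
A rough translation of 6) into SQL, as a sketch (the client_id and the
target date are placeholders):

SELECT daterange(sa.staff_assign_date, sa.staff_assign_date_end, '[]') AS episode
FROM tbl_staff_assign sa
WHERE sa.client_id = 5
  AND sa.staff_assign_date = (SELECT max(s2.staff_assign_date)
                              FROM tbl_staff_assign s2
                              WHERE s2.client_id = sa.client_id
                                AND s2.staff_assign_date <= DATE '2018-01-15')
  AND COALESCE(sa.staff_assign_date_end, current_date) >= DATE '2018-01-15';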

Re: Given a set of daterange, finding the continuous range that includes a particular date

2018-02-22 Thread Adrian Klaver

On 02/22/2018 05:23 PM, Ken Tanzer wrote:



On Thu, Feb 22, 2018 at 5:05 PM, Adrian Klaver wrote:

    On 02/22/2018 04:58 PM, Ken Tanzer wrote:

        On Thu, Feb 22, 2018 at 4:53 PM, Adrian Klaver wrote:

            On 02/22/2018 04:44 PM, Ken Tanzer wrote:

                Hi, hoping to get some help with this.  I'm needing to take
                a series of dateranges and, given a specific date, return a
                single continuous daterange that includes that date.

                To elaborate a bit, I've got lots of tables that include
                start and end dates.  For example:

                CREATE TABLE tbl_staff_assign (
                    staff_assign_id         SERIAL PRIMARY KEY,
                    client_id               INTEGER NOT NULL REFERENCES
                        tbl_client (client_id),
                    staff_id                INTEGER REFERENCES
                        tbl_staff(staff_id),
                    staff_assign_type_code  VARCHAR(10) NOT NULL REFERENCES
                        tbl_l_staff_assign_type (staff_assign_type_code),
                    staff_assign_date       DATE NOT NULL,
                    staff_assign_date_end   DATE,
                ...

                So a client might leave a program and then return later, or
                they might simply switch to another staff_id.  (In which
                case one record will have an end date, and the next record
                will start on the next day.)  In this case I need to know
                "what period were they continuously in the program that
                includes X date?"  So I'd like to be able to do something
                like:

                "SELECT staff_assign_date,continuous_daterange(
                staff_assign_date, (SELECT
                array_agg(daterange(staff_assign_date,staff_assign_date_end,'[]')
                ) FROM staff_assign sa2 WHERE sa2.client_id=sa.client_id) FROM
                staff_assign sa

                I've done this before with procedures specific to a
                particular table, and working with the start and end dates.
                I'm now wanting to try to do this once, generically, so that
                it will work for all my cases.  So I'm hoping to do this in
                a way that performance isn't horrible.  And it's a little
                unclear to me how much and how I might be able to use the
                daterange operators to accomplish this efficiently.

            The operator I use to solve similar problems:

            https://www.postgresql.org/docs/10/static/functions-range.html

            @>  contains element  '[2011-01-01,2011-03-01)'::tsrange @>
            '2011-01-10'::timestamp   t

        Thanks Adrian.  But how would you apply that to this situation,
        where I have a series of (quite possibly discontinuous) dateranges?

    This is going to depend on a more formal definition of the problem
    with some sample data.  Right now I am trying to reconcile "what
    period were they continuously in the program that includes X date?"
    with "where I have a series of (quite possibly discontinuous)
    dateranges?"  Maybe it's just me, but I can't see how discontinuous
    can also be continuous.



*Start   End*
-- Episode 1
1/1/16  3/30/16
4/1/16  4/30/16
-- Episode 2
1/1/18  1/31/18
2/1/18  NULL

Given 1/15/18, should return 1/1/18-->NULL
Given 1/15/16, should return 1/1/16-->4/30/16


Just thinking out loud here, in a function:

1) For a client_id you can find the min(staff_assign_date).

2) You can create a max(staff_assign_date_end) by using 
COALESCE(staff_assign_date_end, current_date)


3) You now have the outer range for the episodes.

4) In that range of dates, for each staff_assign_date you find the 
staff_assign_date_end that is greater than the staff_assign_date but less 
than the next staff_assign_date. That will give you your episodes.


5) You can then test to see if the X date is in the calculated episodes.

6) As a shortcut you can find the min(staff_assign_date) that is less than 
the X date and see if the staff_assign_date_end is greater than the X date.




Thinking about this a bit more, I'm wondering if a window function could 
be used for this?  I've used them a bit, but definitely never wrote one. 
Something like
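
One possible shape, sketched with lag() under the assumption of
non-overlapping rows (illustrative only):

SELECT client_id, staff_assign_date, staff_assign_date_end,
       -- flag rows that start a new episode; a running SUM of this flag
       -- then gives an episode number to group on
       CASE WHEN staff_assign_date
                 > lag(staff_assign_date_end)
                     OVER (PARTITION BY client_id
                           ORDER BY staff_assign_date) + 1
            THEN 1 ELSE 0 END AS new_episode
FROM tbl_staff_assign;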



Postgres hangs for the query "lock table in exclusive mode"

2018-02-22 Thread Vinodh NV
Hi,


Postgres hangs for the query "lock table  in exclusive mode"
Can you please let me know what could be the reason and how to resolve it?


Regards,
Vinodh.


Re: Getting a primitive numeric value from "DatumGetNumeric"?

2018-02-22 Thread Tom Lane
Michael Paquier  writes:
> On Thu, Feb 22, 2018 at 08:00:45PM -0500, Demitri Muna wrote:
>> Out of curiosity, how does one read a numeric type from within a C
>> extension (i.e. get a number value out of the Datum type)?

> PG_GETARG_NUMERIC(), no?

I think the core point here is that PG's "numeric" type *isn't* any
primitive C type; it's a variable-length BCD encoding.  If you want
to deal with it you can, but you must spend some time reading
src/backend/utils/adt/numeric.c, as Michael suggests.  If you just
want a convenient C approximation, then work with float8 and let
the implicit-coercion machinery do the conversion for you.
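
A quick SQL-level illustration of that coercion (the SQL function here is a
stand-in for a C function declared to take float8):

CREATE FUNCTION half(x double precision) RETURNS double precision
AS $$ SELECT x / 2 $$ LANGUAGE SQL;

SELECT half(123.45::numeric);  -- numeric is implicitly coerced to float8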

regards, tom lane



Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-22 Thread David Rowley
On 23 February 2018 at 04:00, Bill Moran  wrote:
> 2) The negative impact of an unused index is tiny compared to the negative
>  impact of a missing index.

I'd say that would entirely depend on the workload of the table and
the entire cluster. Disk space and the I/O to write WAL and index pages
are surely finite resources. Not to mention the possibility of
disallowing HOT updates in the heap.

It would seem to me that anyone using the "shotgun" indexing method
may end up having to learn more about indexing the hard way.  Learning
the hard way is something I like to try to avoid, personally. Probably
it all has to come down to how important it is that your app actually
can handle the load vs. dev/DBA experience level divided by time; both
of the latter two are also finite resources. So, it probably all has
to be carefully balanced, and quite possibly a person's opinion
strongly relates to their experience.  If you were in the air traffic
control business, perhaps your opinion might not be the same!? ...
Sorry, the aeroplane crashed because the replica instance lagged and
the plane's location wasn't updated... Analysis shows that the DBA
indexed every column in the table and the WAL volume was more than the
network's bandwidth could handle over the holiday period.  (Note: I
know nothing about air traffic control, but it does seem like
something you'd want to make stable systems for, games on the
internet, probably less so...).


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Performance issues during backup

2018-02-22 Thread PT
On Fri, 23 Feb 2018 01:46:22 +
Dylan Luong  wrote:

> Hi
> Thanks for the reply.
> I have trialed the ionice -c 2 -n 7 tar…. change to our backup script and it 
> appears to have helped but not by much.

If "ionice -c 2 -n 7" helped some, maybe try "ionice -c 3" to set IO priority
to the lowest possible level.

However, I've used this technique many times on heavily loaded systems without
issue. If you're having to squirm this much to avoid problems, you probably
need to get better hardware, or investigate the possibility that your
hardware is faulty or that there is some other IO-related issue.

> The affected queries are more of the update/delete/insert queries. Could 
> pg_start_backup be causing locking of some sort?

Not in my experience. And the fact that they are write queries having trouble
makes me theorize that you're saturating the write capacity of your disks.

> From: Rene Romero Benavides [mailto:rene.romer...@gmail.com]
> Sent: Wednesday, 21 February 2018 1:37 AM
> To: Laurenz Albe 
> Cc: Dylan Luong ; pgsql-general@lists.postgresql.org
> Subject: Re: Performance issues during backup
> 
> What about sending the backup to a different server, through ssh / rsync or 
> something? That would save lots of IO activity.
> 
> 2018-02-20 2:02 GMT-06:00 Laurenz Albe:
> Dylan Luong wrote:
> > We perform nightly base backups of our production PostgreSQL instance. We 
> > have a script that basically puts the instance
> > into backup mode and then backs up (tar) the /Data directory and then takes 
> > it out of backup mode.
> > Ie,
> > psql -c "SELECT pg_start_backup('${DATE}');"
> > tar -cvf - ${DATA_DIR} --exclude ${DATA_DIR}/pg_log | split -d -b 
> > $TAR_SPLIT_SIZE - ${BACKUP_DIR}/${BACKUP_NAME}
> > psql -c "SELECT pg_stop_backup();"
> >
> > The size of our database is about 250GB and it usually takes about 1 hour 
> > to backup.
> > During this time, we have performance issue where queries can take up to 
> > 15secs to return where normally it takes 2 to 3 seconds.
> > During this time (1:30am) usage is low (less than 10 users) on the system.
> >
> > Has anyone experienced the same problem, and are there any suggestions on 
> > where to look to resolve it?
> 
> The "tar" is probably taking up too much I/O bandwidth.
> 
> Assuming this is Linux, you could run it with
> 
>   ionice -c 2 -n 7 tar ...
> 
> or
> 
>   ionice -c 3 tar ...
> 
> Of course then you can expect the backup to take more time.
> 
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
> 
> 
> 
> --
> Genius is 1% inspiration and 99% perspiration.
> Thomas Alva Edison
> http://pglearn.blogspot.mx/
> 


-- 
Bill Moran



Re: Getting a primitive numeric value from "DatumGetNumeric"?

2018-02-22 Thread Michael Paquier
On Thu, Feb 22, 2018 at 08:00:45PM -0500, Demitri Muna wrote:
> Ah, I wasn’t aware of implicit coercion. Yes, that solves the problem 
> perfectly, thanks.
> 
> Out of curiosity, how does one read a numeric type from within a C
> extension (i.e. get a number value out of the Datum type)? I ask as I
> was unable to find an example and there are a few open questions on
> Stack Overflow
> (e.g. 
> https://stackoverflow.com/questions/12588554/postgres-c-function-passing-returning-numerics).
>  

PG_GETARG_NUMERIC(), no? When working on implementing your own data
types, or when hacking on functions which manipulate arguments of an
existing datatype, looking at the input and output functions helps a
lot.  In your case, numeric_in and numeric_out in
src/backend/utils/adt/numeric.c are full of hints.
--
Michael




RE: Performance issues during backup

2018-02-22 Thread Dylan Luong
Hi
Thanks for the reply.
I have trialed the ionice -c 2 -n 7 tar…. change to our backup script and it 
appears to have helped but not by much.
The affected queries are more of the update/delete/insert queries. Could 
pg_start_backup be causing locking of some sort?
Regards
Dylan

From: Rene Romero Benavides [mailto:rene.romer...@gmail.com]
Sent: Wednesday, 21 February 2018 1:37 AM
To: Laurenz Albe 
Cc: Dylan Luong ; pgsql-general@lists.postgresql.org
Subject: Re: Performance issues during backup

What about sending the backup to a different server, through ssh / rsync or 
something? That would save lots of IO activity.

2018-02-20 2:02 GMT-06:00 Laurenz Albe:
Dylan Luong wrote:
> We perform nightly base backups of our production PostgreSQL instance. We have 
> a script that basically puts the instance
> into backup mode and then backs up (tar) the /Data directory and then takes it 
> out of backup mode.
> Ie,
> psql -c "SELECT pg_start_backup('${DATE}');"
> tar -cvf - ${DATA_DIR} --exclude ${DATA_DIR}/pg_log | split -d -b 
> $TAR_SPLIT_SIZE - ${BACKUP_DIR}/${BACKUP_NAME}
> psql -c "SELECT pg_stop_backup();"
>
> The size of our database is about 250GB and it usually takes about 1 hour to 
> backup.
> During this time, we have performance issue where queries can take up to 
> 15secs to return where normally it takes 2 to 3 seconds.
> During this time (1:30am) usage is low (less than 10 users) on the system.
>
> Has anyone experienced the same problem, and are there any suggestions on 
> where to look to resolve it?

The "tar" is probably taking up too much I/O bandwidth.

Assuming this is Linux, you could run it with

  ionice -c 2 -n 7 tar ...

or

  ionice -c 3 tar ...

Of course then you can expect the backup to take more time.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com



--
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/



Re: Getting a primitive numeric value from "DatumGetNumeric"?

2018-02-22 Thread Demitri Muna
Hi Tom,

On Feb 20, 2018, at 10:54 PM, Tom Lane  wrote:

> Well, the implicit coercions work in your favor in this particular case.

Ah, I wasn’t aware of implicit coercion. Yes, that solves the problem 
perfectly, thanks.

Out of curiosity, how does one read a numeric type from within a C extension 
(i.e. get a number value out of the Datum type)? I ask as I was unable to find 
an example and there are a few open questions on Stack Overflow (e.g. 
https://stackoverflow.com/questions/12588554/postgres-c-function-passing-returning-numerics).

Thanks,
Demitri




Re: Given a set of daterange, finding the continuous range that includes a particular date

2018-02-22 Thread Ken Tanzer
On Thu, Feb 22, 2018 at 4:53 PM, Adrian Klaver 
wrote:

> On 02/22/2018 04:44 PM, Ken Tanzer wrote:
>
>> Hi, hoping to get some help with this.  I'm needing to take a series of
>> dateranges and, given a specific date, return a single continuous
>> daterange that includes that date.
>>
>> To elaborate a bit, I've got lots of tables that include start and end
>> dates.  For example:
>>
>> CREATE TABLE tbl_staff_assign (
>>  staff_assign_id SERIAL PRIMARY KEY,
>>  client_id   INTEGER NOT NULL REFERENCES tbl_client
>> (client_id),
>>  staff_idINTEGER REFERENCES tbl_staff(staff_id),
>>  staff_assign_type_code  VARCHAR(10) NOT NULL REFERENCES
>> tbl_l_staff_assign_type (staff_assign_type_code),
>>  staff_assign_date   DATE NOT NULL,
>>  staff_assign_date_end   DATE,
>> ...
>>
>> So a client might leave a program and then return later, or they might
>> simply switch to another staff_id.  (In which case one record will have an
>> end date, and the next record will start on the next day.)  In this case I
>> need to know "what period were they continuously in the program that
>> includes X date?"  So I'd like to be able to do something like:
>>
>> "SELECT staff_assign_date,continuous_daterange( staff_assign_date,
>> (SELECT array_agg(daterange(staff_assign_date,staff_assign_date_end,'[]')
>> ) FROM staff_assign sa2 WHERE sa2.client_id=sa.client_id) FROM
>> staff_assign sa
>>
>> I've done this before with procedures specific to a particular table, and
>> working with the start and end dates.  I'm now wanting to try to do this
>> once generically that will work for all my cases.  So I'm hoping to do this
>> in a way that performance isn't horrible.  And it's a little unclear to me
>> how much and how I might be able to use the daterange operators to
>> accomplish this efficiently.
>>
>
> The operator I use to solve similar problems:
>
> https://www.postgresql.org/docs/10/static/functions-range.html
>
> @>  contains element  '[2011-01-01,2011-03-01)'::tsrange @>
> '2011-01-10'::timestamp   t
>
>
Thanks Adrian.  But how would you apply that to this situation, where I
have a series of (quite possibly discontinuous) dateranges?


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Given a set of daterange, finding the continuous range that includes a particular date

2018-02-22 Thread Ken Tanzer
Hi, hoping to get some help with this.  I'm needing to take a series of
dateranges and, given a specific date, return a single continuous
daterange that includes that date.

To elaborate a bit, I've got lots of tables that include start and end
dates.  For example:

CREATE TABLE tbl_staff_assign (
staff_assign_id SERIAL PRIMARY KEY,
client_id   INTEGER NOT NULL REFERENCES tbl_client
(client_id),
staff_idINTEGER REFERENCES tbl_staff(staff_id),
staff_assign_type_code  VARCHAR(10) NOT NULL REFERENCES
tbl_l_staff_assign_type (staff_assign_type_code),
staff_assign_date   DATE NOT NULL,
staff_assign_date_end   DATE,
...

So a client might leave a program and then return later, or they might
simply switch to another staff_id.  (In which case one record will have an
end date, and the next record will start on the next day.)  In this case I
need to know "what period were they continuously in the program that
includes X date?"  So I'd like to be able to do something like:

"SELECT staff_assign_date,continuous_daterange( staff_assign_date, (SELECT
array_agg(daterange(staff_assign_date,staff_assign_date_end,'[]')
) FROM staff_assign sa2 WHERE sa2.client_id=sa.client_id) FROM staff_assign
sa

I've done this before with procedures specific to a particular table, and
working with the start and end dates.  I'm now wanting to try to do this
once generically that will work for all my cases.  So I'm hoping to do this
in a way that performance isn't horrible.  And it's a little unclear to me
how much and how I might be able to use the daterange operators to
accomplish this efficiently.

Any advice or suggestions or ways to go about this appreciated.  Thanks!

Ken

p.s.,  Another small wrinkle is these records aren't always perfect, and
ideally I'd allow for an optional fudge factor that would allow small gaps
to be ignored.  I could just add that in every query
(start_date+2,end_date-2), but it might be nice to have the function do it,
if it didn't badly hurt performance.


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: On error message (0x80090325) whilst installing Apps Stack Builder

2018-02-22 Thread Alan Hodgson
> > The problem is that I keep getting the following error message:
> > --------------------------------------------------------------
> > A certificate verification problem was encountered whilst accessing
> > https://www.postgresql.org/applications-v2.xml
> > schannel: next InitializeSecurityContext failed:
> > SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was
> > issued by an authority that is not trusted.
> > This means that the source of the download cannot be verified. It is
> > recommended that you do not continue with the download as it may
> > be coming from a site that is pretending to be the intended download
> > site and may contain viruses or malware.
> > 
> > Do you wish to continue?
> > --------------------------------------------------------------
> > I intend to continue, but there again I keep getting the following
> > error message:
> > --------------------------------------------------------------
> > Couldn't access the URL
> > 'https://www.postgresql.org/applications-v2.xml'.
> > 
> > ERROR: schannel: next InitializeSecurityContext failed:
> > SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was
> > issued by an authority that is not trusted.
> > --------------------------------------------------------------
> > I should be most grateful if any of you would provide me with any
> > advice.
> > 

Stack Builder appears to be written in Java. You need a JDK newer than
8u101; that was the first version to include the Let's Encrypt root.

On error message (0x80090325) whilst installing Apps Stack Builder

2018-02-22 Thread Pablo Liendo

Hi,

Although I'm fairly familiar with IT in general and DBMSs in
particular, I'm new to GIS. After reviewing several sources it seems
that a good choice to start with is PostgreSQL.

I downloaded and successfully installed the setup on a PC desktop
running Windows Seven Home Premium.

Then came Stack Builder 4.1.0's turn. When instructed to select the
installation I am installing software for, I select "PostgreSQL 9.5 (x64)
on port 5432".

The problem is that I keep getting the following error message:

A certificate verification problem was encountered whilst accessing
https://www.postgresql.org/applications-v2.xml
schannel: next InitializeSecurityContext failed:
SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was
issued by an authority that is not trusted.
This means that the source of the download cannot be verified. It is
recommended that you do not continue with the download as it may
be coming from a site that is pretending to be the intended download
site and may contain viruses or malware.

Do you wish to continue?
---
I intend to continue, but there again I keep getting the following
error message:

Couldn't access the URL
'https://www.postgresql.org/applications-v2.xml'.

ERROR: schannel: next InitializeSecurityContext failed:
SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was
issued by an authority that is not trusted.
---
I should be most grateful if any of you would provide me with any advice.

Best regards.
Pablo Liendo
Caracas








Re: Any reason not to show "null input" (strict) column in \df+ output?

2018-02-22 Thread Tom Lane
"David G. Johnston"  writes:
> The STRICT-ness of a function seems like it should be viewable when using
> \df+; is its absence just an oversight?

I've been annoyed by that in the past.  I have some vague memory of
proposing adding it and being shot down, on don't-remember-what grounds.
But I couldn't find any such discussion in a quick trawl of the archives,
so maybe I'm misremembering.  The only reasonable counter-argument I can
think of now is that it'd make the display Even Wider ... but \df+ long
since blew past any reasonable screen width.
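
In the meantime the information is one catalog lookup away — a sketch (the
function name is a placeholder):

SELECT proname, proisstrict
FROM pg_proc
WHERE proname = 'my_func';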

regards, tom lane



Any reason not to show "null input" (strict) column in \df+ output?

2018-02-22 Thread David G. Johnston
The STRICT-ness of a function seems like it should be viewable when using
\df+; is its absence just an oversight?

David J.


Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-22 Thread Bill Moran
On Thu, 22 Feb 2018 03:57:34 +0100
Tomas Vondra  wrote:
> 
> On 02/21/2018 05:00 PM, Bill Moran wrote:
> > On Wed, 21 Feb 2018 13:33:18 +0100
> > Alexander Farber  wrote:
> > 
> >> Hi Martin -
> >>
> >> On Wed, Feb 21, 2018 at 1:26 PM, Martin Moore 
> >> wrote:
> >>
> >>> I’m no expert but I’d think it unlikely an index would be considered for a
> >>> table with only 100 rows in. Also I’m pretty sure only one index per table
> >>> is used, so you’d want to put state1 and state2 in one index.
> >>
> >> I hope to have more records in the words_games table later when my game is
> >> launched (currently in soft launch/beta).
> > 
> > To elaborate+clarify Martin's comments.
> > 
> > The index will not be used while the table is small because it's actually 
> > slower
> > to process an index than it is to just read the entire table. However, as 
> > the
> > table gets more rows, these timings will reverse and Postgres will start 
> > using
> > the indexes. It's probably best to just create them even though the table is
> > small. The performance improvement you'll get when the table grows will be
> > well worth it, and it avoids the problem of trying to remember to create it 
> > later.
> > 
> > However, Martin's other comment about only using a single index is 
> > incorrect.
> > Postgres can use multiple indexes per query, so it's often good practice to
> > put indexes on every column that might ever be used in a WHERE clause.
> 
> I call this practice "shotgun" and generally discourage people from
> using it. It seems attractive, but not every where condition can be
> evaluated using an index, and every index has maintenance overhead.
> 
> There are cases when it's somewhat reasonable (e.g. when you don't know
> which columns will be referenced by WHERE conditions, and data ingestion
> has lower priority than queries). But that doesn't seem to be the case
> here - you know the WHERE conditions, and people are likely sending a
> lot of inserts (and expecting low latency responses).

Can't _really_ disagree with anything you're saying there ... but I disagree
with the overall sentiment for the following reasons:

1) Not everyone has easy access to experienced people like you and me. As a
 result, I often recommend the "least likely to be wrong" course of action
 instead of recommending investigation that requires expertise that the
 original poster might not possess ... after all, if they had the experience
 to do all the research, it's unlikely that they'd be asking this question
 in the first place.
2) The negative impact of an unused index is tiny compared to the negative
 impact of a missing index.
3) Dropping an unused index is _far_ less headache than creating a missing
 index on a busy database. (See the sketch after this list for spotting
 unused indexes.)
4) Without knowing _all_ the details of the app and how it works, my past
 experience is that problems are about a jillion times more likely to be
 the result of underindexing (although I _have_ seen issues due to
 overindexing, so it _does_ happen)
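
On 2) and 3), a minimal sketch for spotting drop candidates later (statistics
can be misleading — they reset, and replicas may use indexes the primary
does not):

SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0        -- never used since the stats were last reset
ORDER BY schemaname, relname;

-- and then, once you are sure it is dead weight:
-- DROP INDEX CONCURRENTLY some_unused_index;  -- index name is a placeholder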

I can't argue that the _best_ course of action is to set up a simulation
that can exercise the system at predicted size and load levels and use that
to tune things. But communicating all that to others has never been easy
in my experience. As recently as this month my team was blown away that I
could create a simulation that demonstrated how my code would behave under
real-world like conditions. It's apparently not something that a lot of
people understand or are good at or something.

-- 
Bill Moran 



Re: initdb when data/ folder has mount points

2018-02-22 Thread Ron Johnson

On 02/22/2018 07:22 AM, David Steele wrote:

On 2/22/18 1:16 AM, Michael Paquier wrote:

On Wed, Feb 21, 2018 at 07:56:38PM -0500, David Steele wrote:

On 2/21/18 7:01 PM, Tom Lane wrote:

For pg_log, just put it somewhere else and set the appropriate
configuration option to say where to write the postmaster log files.
Or you could use a symlink, like the solution for pg_xlog, but
I don't see any advantage there.

Symlinking pg_log is not ideal because the logs end up in the backup. It
gets pretty weird when those logs get restored to a standby and somebody
starts reading them.

log_directory in postgresql.conf can be set up with an absolute
directory value.  So there is no actual need for a symlink with pg_log.
This also reduces the amount of data transferred as part of base
backups, which do not actually need the logs.

Yes, I really should have said having pg_log in the data directory at
all is not ideal, symlinked or no.


Is there a Best Practices wiki? 
(https://wiki.postgresql.org/wiki/Detailed_installation_guides is pretty old 
and thin.)



--
Money Angular momentum makes the world go 'round.


Re: initdb when data/ folder has mount points

2018-02-22 Thread David Steele
On 2/22/18 1:16 AM, Michael Paquier wrote:
> On Wed, Feb 21, 2018 at 07:56:38PM -0500, David Steele wrote:
>> On 2/21/18 7:01 PM, Tom Lane wrote:
>>> For pg_log, just put it somewhere else and set the appropriate
>>> configuration option to say where to write the postmaster log files.
>>> Or you could use a symlink, like the solution for pg_xlog, but
>>> I don't see any advantage there.
>>
>> Symlinking pg_log is not ideal because the logs end up in the backup. It
>> gets pretty weird when those logs get restored to a standby and somebody
>> starts reading them.
> 
> log_directory in postgresql.conf can be set up with an absolute
> directory value.  So there is no actual need for a symlink with pg_log.
> This also reduces the amount of data transferred as part of base
> backups, which do not actually need the logs. 

Yes, I really should have said having pg_log in the data directory at
all is not ideal, symlinked or no.
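
For reference, pointing the log directory outside the data directory is a
one-setting change — a sketch (the path is illustrative, and
logging_collector must be on for log_directory to apply):

ALTER SYSTEM SET log_directory = '/var/log/postgresql';
SELECT pg_reload_conf();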

-- 
-David
da...@pgmasters.net





Re: Building PostgreSQL old version from source to test vulnerability CVE-2017-7546

2018-02-22 Thread Julián Jiménez González
Thanks for your answer, Christoph. I've tried to do that but I can't debug
it inside GDB. I configured like so:

CFLAGS="-O0 -g" ./configure --enable-debug

As I found suggested in a Stack Overflow thread. Then I ran make and make
install, and when I launch gdb, either like

gdb --args /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data

or starting it normally and then attaching GDB (having found the PID with
SELECT pg_backend_pid()):

gdb -p 

I can't manage to set a breakpoint in crypt.c, for example. It theoretically
loaded debugging symbols, but I can't find the .c files inside the src/
tree from which I built Postgres. If I try to add them:

(gdb) add-symbol-file software/postgres_dec6e47/src/backend/libpq/crypt.c
The address where software/postgres_dec6e47/src/backend/libpq/crypt.c has
been loaded is missing

I suspect this has something to do with postgres having to be launched as
the postgres user while gdb is not, besides my having installed it from my
normal user rather than postgres. When trying to build it as the postgres
user, it basically complains about that user not being in sudoers.

On the other hand, I tried to set up logging and I did not get any further
info.

Thanks.



Julián Jiménez González

Investigador - Desarrollador | Área de Servicios y Aplicaciones

Researcher - Developer | Services & Applications Department

Ph. (+34) 986 120 430  Ext. 3021
jjime...@gradiant.org  |  www.gradiant.org




2018-02-22 10:55 GMT+01:00 Christoph Berg :

> Re: Julián Jiménez González 2018-02-21 

Re: Building PostgreSQL old version from source to test vulnerability CVE-2017-7546

2018-02-22 Thread Christoph Berg
Re: Julián Jiménez González 2018-02-21 

Re: Selecting the most recent timestamptz

2018-02-22 Thread Alexander Farber
Hi Ken -

On Thu, Feb 22, 2018 at 10:24 AM, Ken Tanzer  wrote:

> On Thu, Feb 22, 2018 at 1:09 AM, Alexander Farber <
> alexander.far...@gmail.com> wrote:
>
>> Good morning, there are these 2 records in a table:
>>
>> # select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
>> gid=10;
>             played             | mid | action | gid | uid 
> -------------------------------+-----+--------+-----+-----
>>  2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
>>  2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
>> (2 rows)
>>
>
> On a related note for the list, I know of at least two other ways to do
> this.  Are any of them better or worse?
>
> SELECT DISTINCT ON (gid) [fields] FROM words_moves m WHERE gid=10 ORDER BY
> gid,played DESC
> SELECT [fields] FROM words_moves m WHERE gid=10 ORDER BY played DESC limit
> 1;
>
>
yes, your suggestions work for me too:

# select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
gid=10 order by m.played desc limit 1;
            played             | mid | action | gid | uid 
-------------------------------+-----+--------+-----+-----
 2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
(1 row)

words=> select distinct on (gid) m.played, m.mid, m.action, m.gid, m.uid
from words_moves m where gid=10 order by m.gid, m.played desc;
            played             | mid | action | gid | uid 
-------------------------------+-----+--------+-----+-----
 2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
(1 row)

# explain select m.played, m.mid, m.action, m.gid, m.uid from words_moves m
where gid=10 order by m.played desc limit 1;
                               QUERY PLAN                               
------------------------------------------------------------------------
 Limit  (cost=27.19..27.19 rows=1 width=29)
   ->  Sort  (cost=27.19..27.19 rows=2 width=29)
         Sort Key: played DESC
         ->  Seq Scan on words_moves m  (cost=0.00..27.18 rows=2 width=29)
               Filter: (gid = 10)
(5 rows)

# explain select distinct on (gid) m.played, m.mid, m.action, m.gid, m.uid
from words_moves m where gid=10 order by m.gid, m.played desc;
                               QUERY PLAN                               
------------------------------------------------------------------------
 Unique  (cost=27.19..27.19 rows=2 width=29)
   ->  Sort  (cost=27.19..27.19 rows=2 width=29)
         Sort Key: played DESC
         ->  Seq Scan on words_moves m  (cost=0.00..27.18 rows=2 width=29)
               Filter: (gid = 10)
(5 rows)

Actually my real query was a bit bigger:

# select
g.finished, g.gid, g.played1, g.played2, g.state1, g.state2, g.score1,
g.score2, m.action
from words_games g left join words_moves m on g.gid=m.gid
and not exists (select 1 from words_moves x where m.gid=x.gid AND x.played
> m.played)
where reason is null and finished is not null;

           finished            | gid |            played1            |            played2            | state1 | state2 | score1 | score2 | action 
-------------------------------+-----+-------------------------------+-------------------------------+--------+--------+--------+--------+--------
 2018-02-19 17:05:03.689277+01 |  72 | 2018-02-19 17:03:57.329402+01 | 2018-02-19 17:05:03.689277+01 | won    | lost   |      4 |      0 | resign
 2018-02-19 17:49:40.163458+01 |  63 | 2018-02-19 16:38:18.686566+01 | 2018-02-19 17:49:40.163458+01 | won    | lost   |      5 |      0 | resign
 2018-02-19 17:53:47.904488+01 |  89 | 2018-02-19 17:52:20.34824+01  | 2018-02-19 17:53:47.904488+01 | won    | lost   |      0 |      0 | resign
 2018-02-19 18:19:42.10843+01  | 102 | 2018-02-19 18:10:03.358555+01 | 2018-02-19 18:19:42.10843+01  | won    | lost   |     13 |      0 | resign
 2018-02-19 19:11:25.984277+01 | 117 | 2018-02-19 18:59:40.940102+01 | 2018-02-19 19:11:25.984277+01 | won    | lost   |     13 |      0 | resign
 2018-02-19 19:56:11.491049+01 | 128 | 2018-02-19 19:51:40.209479+01 | 2018-02-19 19:56:11.491049+01 | won    | lost   |      5 |      0 | resign


Regards
Alex


Re: Selecting the most recent timestamptz

2018-02-22 Thread Ken Tanzer
On Thu, Feb 22, 2018 at 1:09 AM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Good morning, there are these 2 records in a table:
>
> # select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
> gid=10;
>             played             | mid | action | gid | uid 
> -------------------------------+-----+--------+-----+-----
>  2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
>  2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
> (2 rows)
>
> I try to get the record with the latest timestamp by adding a NOT EXISTS
> condition -
>
> # select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
> gid=10 and not exists (select 1 from words_moves x where m.mid=x.mid AND
> x.played > m.played);
>             played             | mid | action | gid | uid 
> -------------------------------+-----+--------+-----+-----
>  2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
>  2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
> (2 rows)
>
> Why are 2 records still returned? I am probably overlooking something
> simple, sorry...
>
> Thank you
> Alex
>
>
In your example, you have different values for mid.  I'm thinking you meant
gid?

select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
gid=10 and not exists (select 1 from words_moves x where *m.gid=x.gid* AND
x.played > m.played);

On a related note for the list, I know of at least two other ways to do
this.  Are any of them better or worse?

SELECT DISTINCT ON (gid) [fields] FROM words_moves m WHERE gid=10 ORDER BY
gid,played DESC
SELECT [fields] FROM words_moves m WHERE gid=10 ORDER BY played DESC limit
1;
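
A third variant for comparison, as a sketch (not from the original message),
using a window function:

SELECT played, mid, action, gid, uid
FROM (SELECT m.*, row_number() OVER (ORDER BY played DESC) AS rn
      FROM words_moves m
      WHERE gid = 10) sub
WHERE rn = 1;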

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Selecting the most recent timestamptz

2018-02-22 Thread Alexander Farber
Ahh, thank you Laurenz -

On Thu, Feb 22, 2018 at 10:18 AM, Laurenz Albe 
wrote:

>
> You are only checking if there is a later timestamp *for the same "mid"*.
>
> Since the two rows have different "mid", they are not compared.
>

no it works -

# select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
gid=10 and not exists (select 1 from words_moves x where m.gid=x.gid AND
x.played > m.played);
            played             | mid | action | gid | uid 
-------------------------------+-----+--------+-----+-----
 2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
(1 row)


Re: Selecting the most recent timestamptz

2018-02-22 Thread Alexander Farber
s/no it works/now it works/


Re: Selecting the most recent timestamptz

2018-02-22 Thread Laurenz Albe
Alexander Farber wrote:
> Good morning, there are these 2 records in a table:
> 
> # select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where 
> gid=10;
>             played             | mid | action | gid | uid 
> -------------------------------+-----+--------+-----+-----
>  2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
>  2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
> (2 rows)
> 
> I try to get the record with the latest timestamp by adding a NOT EXISTS 
> condition -
> 
> # select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where 
> gid=10
> and not exists (select 1 from words_moves x where m.mid=x.mid AND x.played > 
> m.played);
>             played             | mid | action | gid | uid 
> -------------------------------+-----+--------+-----+-----
>  2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
>  2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
> (2 rows)
> 
> Why are 2 records still returned? I am probably overlooking something simple, 
> sorry...

You are only checking if there is a later timestamp *for the same "mid"*.

Since the two rows have different "mid", they are not compared.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Selecting the most recent timestamptz

2018-02-22 Thread Alexander Farber
Good morning, there are these 2 records in a table:

# select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
gid=10;
            played             | mid | action | gid | uid 
-------------------------------+-----+--------+-----+-----
 2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
 2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
(2 rows)

I try to get the record with the latest timestamp by adding a NOT EXISTS
condition -

# select m.played, m.mid, m.action, m.gid, m.uid from words_moves m where
gid=10 and not exists (select 1 from words_moves x where m.mid=x.mid AND
x.played > m.played);
            played             | mid | action | gid | uid 
-------------------------------+-----+--------+-----+-----
 2018-02-19 14:42:08.46222+01  |  12 | play   |  10 |   9
 2018-02-20 15:06:01.430634+01 | 216 | expire |  10 |  11
(2 rows)

Why are 2 records still returned? I am probably overlooking something
simple, sorry...

Thank you
Alex

P.S. In case more details are needed -

# explain select m.played, m.mid, m.action, m.gid, m.uid from words_moves m
where gid=10 and not exists ( select 1 from words_moves x where m.mid=x.mid
AND x.played > m.played);
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Nested Loop Anti Join  (cost=0.28..43.78 rows=1 width=29)
   ->  Seq Scan on words_moves m  (cost=0.00..27.18 rows=2 width=29)
         Filter: (gid = 10)
   ->  Index Scan using words_moves_pkey on words_moves x  (cost=0.28..8.29 rows=1 width=16)
         Index Cond: (m.mid = mid)
         Filter: (played > m.played)
(6 rows)

# \d words_moves
 Table "public.words_moves"
 Column |   Type   | Collation | Nullable |
Default
+--+---+--+--
 mid| bigint   |   | not null |
nextval('words_moves_mid_seq'::regclass)
 action | text |   | not null |
 gid| integer  |   | not null |
 uid| integer  |   | not null |
 played | timestamp with time zone |   | not null |
 tiles  | jsonb|   |  |
 score  | integer  |   |  |
Indexes:
"words_moves_pkey" PRIMARY KEY, btree (mid)
Check constraints:
"words_moves_score_check" CHECK (score >= 0)
Foreign-key constraints:
"words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid) ON
DELETE CASCADE
"words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid) ON
DELETE CASCADE
Referenced by:
TABLE "words_daily" CONSTRAINT "words_daily_mid_fkey" FOREIGN KEY (mid)
REFERENCES words_moves(mid) ON DELETE CASCADE
TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE


Re: Not sure if I should CREATE INDEX for text columns on which I plan to filter later

2018-02-22 Thread Alexander Farber
Thank you for the comments