Re: [GENERAL] Detailed progress reporting for "vacuuming indexes" stage

2017-01-23 Thread Michael Paquier
On Mon, Jan 23, 2017 at 8:32 PM, Alexander Shchapov
 wrote:
> There are 7 stages for the standard VACUUM process, which are reportable
> via this view. For time-consuming stages like "scanning heap" or
> "vacuuming heap" you can get an idea of stage progress by computing:
>
> (heap_blks_scanned | heap_blks_vacuumed) / heap_blks_total * 100%
>
> However, the "vacuuming indexes" stage does not give any idea of how
> long it might take. We have some rather big tables (500M+ records)
> with 5-7 indexes. Sometimes it takes us 20+ hours to get a table
> vacuumed, and all the progress reporting we have for stage 3 is that
> it is stage 3.

Yes, things could be improved here.

> According to the code
> (https://github.com/postgres/postgres/blob/1d25779284fe1ba08ecd57e647292a9deb241376/src/backend/commands/vacuumlazy.c#L708-L712)
> it looks like there might be a way to report the number of indexes left
> to be vacuumed.

Indexes may vary in size. For example, if a table has a bunch of BRIN
indexes and one huge btree, most of the time would be spent on the
btree part, so a bare count of indexes would not make much sense.

> The question is: is it reasonable to expect a patch that adds more
> detailed stage 3 reporting? Say, the number of the index being vacuumed.

What would be more interesting is to do what is already done for the
heap: get information on the index block currently being cleaned up,
with reports made via index_bulk_delete(). That's more work, but it
would be more helpful than just a number.

So patches welcome.
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Tom Lane
Adrian Klaver  writes:
> On 01/23/2017 05:14 PM, David G. Johnston wrote:
>> To your example - testing in UTC is going to always result in failure
>> for Z values <= 0 since they will all result in a UTC date of
>> 2011-01-01.  Choosing +06 would result in a passed test.

> That was sort of the point, I was just using the value that the OP said 
> worked:
> "if change 2011-01-01 00:00:03.925+00 to 2011-01-01 00:00:03.925-06 
> works ok"
> I could not see how it did.

Well,

select '2011-01-01 00:00:03.925-06'::timestamptz >= '2011-01-01'::date;

passes if TimeZone is US central time (UTC-6) or anyplace east of there.
It fails west of there, because the "date" value is interpreted as
midnight local time for purposes of comparison to a "timestamptz" value:

regression=# set timezone = EST5EDT;
SET
regression=# select '2011-01-01 00:00:03.925-06'::timestamptz >=
'2011-01-01'::date;
 ?column? 
----------
 t
(1 row)

regression=# set timezone = PST8PDT;
SET
regression=# select '2011-01-01 00:00:03.925-06'::timestamptz >=
'2011-01-01'::date;
 ?column? 
----------
 f
(1 row)

The key point here is that a CHECK constraint is checked when the row
is stored, and if it depends on any GUC parameters then the
then-prevailing parameter value will be used.  So the OP's problem is
that he has some rows that passed the constraint based on the TimeZone
value that was active when they were stored, but they don't pass the
constraint when TimeZone is UTC.

If the failing rows are failing because of this side of the range
constraint, they must have been stored under a zone setting east
of UTC.  But it's just as likely that they are failing because of
the other side of the range constraint (the <= 2012-01-01 end),
implying that they were stored under a zone setting west of UTC.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Adrian Klaver

On 01/23/2017 05:14 PM, David G. Johnston wrote:

On Mon, Jan 23, 2017 at 6:08 PM, Adrian Klaver wrote:


production=# show timezone;
 TimeZone
----------
 UTC
(1 row)

production=# select ' 2011-01-01 00:00:03.925-06'::timestamptz <
'2011-01-01'::date;
 ?column?
----------
 f
(1 row)


What Tom said - though I missed the part about "the timezone when the
data was entered" - I was thinking it was only the timezone at the time
of the dump that would impact things.

To your example - testing in UTC is going to always result in failure
for Z values <= 0 since they will all result in a UTC date of
2011-01-01.  Choosing +06 would result in a passed test.


That was sort of the point, I was just using the value that the OP said 
worked:


"if change 2011-01-01 00:00:03.925+00 to 2011-01-01 00:00:03.925-06 
works ok"


I could not see how it did.



David J.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread David G. Johnston
On Mon, Jan 23, 2017 at 6:08 PM, Adrian Klaver 
wrote:

>
> production=# show timezone;
>  TimeZone
> ----------
>  UTC
> (1 row)
>
> production=# select ' 2011-01-01 00:00:03.925-06'::timestamptz <
> '2011-01-01'::date;
>  ?column?
> ----------
>  f
> (1 row)

What Tom said - though I missed the part about "the timezone when the data
was entered" - I was thinking it was only the timezone at the time of the
dump that would impact things.

To your example - testing in UTC is going to always result in failure for Z
values <= 0 since they will all result in a UTC date of 2011-01-01.
Choosing +06 would result in a passed test.

David J.


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Adrian Klaver

On 01/23/2017 05:01 PM, David G. Johnston wrote:

On Mon, Jan 23, 2017 at 5:57 PM, Adrian Klaver wrote:


* this is the constraint: "time_stamp_201012ad" CHECK
(time_stamp >=
'2010-12-01'::date AND time_stamp < '2011-01-01'::date)

if I change 2011-01-01 00:00:03.925+00 to 2011-01-01
00:00:03.925-06 it works ok


Not seeing how?:

production=# show timezone;
 TimeZone
----------
 UTC
(1 row)

production=# select ' 2011-01-01 00:00:03.925+00'::timestamptz <
'2011-01-01'::date;
 ?column?
----------
 f
(1 row)

production=# select ' 2011-01-01 00:00:03.925+00'::timestamptz <
'2011-01-01'::date;
 ?column?
----------
 f
(1 row)


You failed to change the timezone to -06 in the second query...


Cut and paste error, what it should have been:

production=# show timezone;
 TimeZone
----------
 UTC
(1 row)

production=# select ' 2011-01-01 00:00:03.925-06'::timestamptz <
'2011-01-01'::date;
 ?column?
----------
 f
(1 row)



David J.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Tom Lane
Edmundo Robles  writes:
> * this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
> '2010-12-01'::date AND time_stamp < '2011-01-01'::date)

If that's on a timestamptz column, it's a seriously dangerous constraint,
because the limitations it enforces are dependent on the current timezone
setting (which will determine what is the midnight boundary for those
dates).  It's entirely possible that your database is unrestorable because
different entries in the column were made under different timezone
settings and there is *no* zone value in which all of them will be able to
pass the constraint.  Even if there is such a zone value, you already know
that UTC isn't it.

I'd suggest dropping the constraint in the source database (or editing the
dump file to remove it), restoring the data, and then looking to clean up
the data before you try to put the constraint back on.  And this time,
express it as something like time_stamp >= '2010-12-01 00:00+00' etc ...
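
Concretely, that might look like this (a sketch using the table and
constraint names from the OP's error message; double-check the intended
month boundaries for your data):

-- Zone-independent version: anchored to explicit UTC instants, so it
-- accepts or rejects the same rows regardless of the TimeZone setting.
ALTER TABLE stlm_201012ad DROP CONSTRAINT time_stamp_201012ad;
ALTER TABLE stlm_201012ad ADD CONSTRAINT time_stamp_201012ad
  CHECK (time_stamp >= '2010-12-01 00:00+00'::timestamptz
     AND time_stamp <  '2011-01-01 00:00+00'::timestamptz);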

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread David G. Johnston
On Mon, Jan 23, 2017 at 5:57 PM, Adrian Klaver 
wrote:

>
>> * this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
>> '2010-12-01'::date AND time_stamp < '2011-01-01'::date)
>>
>> if I change 2011-01-01 00:00:03.925+00 to 2011-01-01
>> 00:00:03.925-06 it works ok
>>
>
> Not seeing how?:
>
> production=# show timezone;
>  TimeZone
> ----------
>  UTC
> (1 row)
>
> production=# select ' 2011-01-01 00:00:03.925+00'::timestamptz <
> '2011-01-01'::date;
>  ?column?
> ----------
>  f
> (1 row)
>
> production=# select ' 2011-01-01 00:00:03.925+00'::timestamptz <
> '2011-01-01'::date;
>  ?column?
> ----------
>  f
> (1 row)
>

You failed to change the timezone to -06 in the second query...

David J.


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Adrian Klaver

On 01/23/2017 04:30 PM, Edmundo Robles wrote:

* at last, i did 'pg_restore -l backup.dmp| sed "s:+00:-06:g"|psql


The above is not the complete command, correct?

 the complete command is:
pg_restore backup.dmp| sed "s:+00:-06:g"|psql database



Another thought, do you have a .psqlrc file that is changing the timezone?



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread David G. Johnston
On Mon, Jan 23, 2017 at 4:45 PM, Edmundo Robles 
wrote:

> * This is  the message: COPY failed for table "stlm_201012ad": ERROR:  new
> row for relation "stlm_201012ad" violates check constraint
> "time_stamp_201012ad"
> DETAIL:  Failing row contains (..., 2011-01-01 00:00:03.925+00, .., 0).
>
> * this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
> '2010-12-01'::date AND time_stamp < '2011-01-01'::date)
>

I'm not that fluent with dump/restore mechanics (hence the
uncertainty below) but...

Your record only passes the check constraint if the system is in a
timezone west of UTC, because then at least 1 hour is subtracted from the
supplied timestamp before it is converted into a date for comparison.

If you set the target database timezone to be the proper designation for
"localtime" the restoration might work.

Your claim that the source system was set up for UTC is suspect...

David J.


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Adrian Klaver

On 01/23/2017 04:30 PM, Edmundo Robles wrote:



On Mon, Jan 23, 2017 at 6:02 PM, Adrian Klaver wrote:

On 01/23/2017 03:45 PM, Edmundo Robles wrote:

Hi!

I have backed up a database with a 'UTC' timezone, but I cannot
restore it on a database with 'localtime' timezone.


Dealing with time is complex, so some inline questions below to try
to figure out exactly what the setup is.


 I have constraints on timestamps, which are the cause of the errors.
* First, I did: alter database  set timezone='UTC'; but it still did
not restore...


On the database you took the backup from or the one you are
restoring to?



  I did alter the  destination database.





* After that I changed 'timezone' to UTC in postgresql.conf and
restarted, but :(


From the source (backup) Postgres instance or the
destination (restore) instance?

I did  change on destination server configuration.



* at last, i did 'pg_restore -l backup.dmp| sed "s:+00:-06:g"|psql


The above is not the complete command, correct?

 the complete command is:
pg_restore backup.dmp| sed "s:+00:-06:g"|psql database


If you output the dump file to plain text:

pg_restore -f backup.sql backup.dmp

is there something in there setting the timezone to something other than
UTC?
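
A quick way to check (a sketch):

grep -in 'timezone' backup.sql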





* this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
'2010-12-01'::date AND time_stamp < '2011-01-01'::date)

if I change 2011-01-01 00:00:03.925+00 to 2011-01-01
00:00:03.925-06 it works ok


Not seeing how?:

production=# show timezone;
 TimeZone
----------
 UTC
(1 row)

production=# select ' 2011-01-01 00:00:03.925+00'::timestamptz <
'2011-01-01'::date;
 ?column?
----------
 f
(1 row)

production=# select ' 2011-01-01 00:00:03.925+00'::timestamptz <
'2011-01-01'::date;
 ?column?
----------
 f
(1 row)


 Is the date type with timezone or not?

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Edmundo Robles
On Mon, Jan 23, 2017 at 6:02 PM, Adrian Klaver 
wrote:

> On 01/23/2017 03:45 PM, Edmundo Robles wrote:
>
>> Hi!
>>
>> I have backed up a database with a 'UTC' timezone, but I cannot
>> restore it on a database with 'localtime' timezone.
>>
>
> Dealing with time is complex, so some inline questions below to try to
> figure out exactly what the setup is.
>
>
>>  I have constraints on timestamps, which are the cause of the errors.
>> * First, I did: alter database  set timezone='UTC'; but it still did
>> not restore...
>>
>
> On the database you took the backup from or the one you are restoring to?



>   I did alter the  destination database.



>
>
>> * After that I changed 'timezone' to UTC in postgresql.conf and
>> restarted, but :(
>>
>
> From the source (backup) Postgres instance or the destination (restore)
> instance?

I did  change on destination server configuration.

>
>
> * at last, i did 'pg_restore -l backup.dmp| sed "s:+00:-06:g"|psql
>>
>
> The above is not the complete command, correct?
>
 the complete command is:
pg_restore backup.dmp| sed "s:+00:-06:g"|psql database


>
> What is the complete command?
>
>> database, but COPY fails for some records, so some tables are empty :(
>> * I will try replacing +00 with an empty string, but ...
>>
>
> In psql and in the receiving database, what does:
>
> show timezone;
>
> return?
>

After changing the configuration, it returns UTC.

>
> Are the timestamp fields you are COPYing into declared with timezone or
> without timezone?
>
The field is declared with timezone.

>
>
>> Why can I not restore it?
>>
>> * I am using PostgreSQL 9.4
>> * The backup is made with pg_dump -Fc, and
>> * I try to restore with pg_restore -d database --disable-triggers
>>
>> * This is  the message: COPY failed for table "stlm_201012ad": ERROR:
>>  new row for relation "stlm_201012ad" violates check constraint
>> "time_stamp_201012ad"
>> DETAIL:  Failing row contains (..., 2011-01-01 00:00:03.925+00, .., 0).
>>
>> * this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
>> '2010-12-01'::date AND time_stamp < '2011-01-01'::date)
>>
>> if I change 2011-01-01 00:00:03.925+00 to 2011-01-01 00:00:03.925-06
>> it works ok
>> --
>>
>>
>
 Is the date type with timezone or not?

>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



--


Re: [GENERAL] pg_dump Conflict with recovery

2017-01-23 Thread julyanto SUTANDANG
On Tue, Jan 24, 2017 at 5:45 AM, Israel Brewster 
wrote:

> On Jan 23, 2017, at 10:06 AM, Jerry Sievers 
> wrote:
>
>
> Israel Brewster  writes:
>
> I have a backup strategy that in part consists of doing pg_dump s on my
> various databases. In order to hopefully reduce/prevent operational
> slow-down as a result of the
> backup, I do the dumps from my secondary server, configured as a hot
> standby with streaming replication.
>
> In general this works fine, but one of my databases has now grown to the
> point that often as not I get the following when trying to dump the
> database:
>
> ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
>
> As I understand it, this is due to the pg_dump taking longer than the
> max_standby_streaming_delay of 180s, and as such could be easily fixed by
> upping that value in the
> config. But is that the "right" fix? Or is there a "better" way?
>
You can try to use *pg_basebackup* rather than only pg_dump; it is
faster and more appropriate.
Actually, pg_basebackup is also a command-line tool for making replicas.

Regards,

Julyanto SUTANDANG

Equnix Business Solutions, PT
(An Open Source and Open Mind Company)
www.equnix.co.id
Pusat Niaga ITC Roxy Mas Blok C2/42.  Jl. KH Hasyim Ashari 125, Jakarta
Pusat
T: +6221 2282 F: +62216315281 M: +628164858028



>
> "Best" way depends on your needs...
>
> You can pause your standby and/or configure settings like the one you
> mentioned to  tolerate the dump conflicting with replication by
> allowing the standby to lag rather than issuing an cancel.
>
> select pg_xlog_replay_pause();
> -- dump here
> select pg_xlog_replay_resume();
>
> The above will of course guarantee that your slave lags vs fiddling with
> the max delay settings and being then subject to possibly moving target
> in terms of dump duration and upstream system behavior.
>
>
> Sounds reasonable. Allows for however long the dump process needs without
> allowing for runaway queries in the general case, as long as I make sure to
> implement things in a way that makes sure the pg_xlog_replay_resume() is
> always called, no matter what happens (not that I've had any failures, I
> just tend to be a bit paranoid about this system).
>
> Thanks!
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
>
>
> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
>
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800
>
>
>


Re: [GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Adrian Klaver

On 01/23/2017 03:45 PM, Edmundo Robles wrote:

Hi!

I have backed up a database with a 'UTC' timezone, but I cannot
restore it on a database with 'localtime' timezone.


Dealing with time is complex, so some inline questions below to try to 
figure out exactly what the setup is.




 I have constraints on timestamps, which are the cause of the errors.
* First, I did: alter database  set timezone='UTC'; but it still did
not restore...


On the database you took the backup from or the one you are restoring to?


* After that I changed 'timezone' to UTC in postgresql.conf and
restarted, but :(


From the source (backup) Postgres instance or the destination (restore)
instance?



* at last, i did 'pg_restore -l backup.dmp| sed "s:+00:-06:g"|psql


The above is not the complete command, correct?

What is the complete command?


database, but COPY fails for some records, so some tables are empty :(
* I will try replacing +00 with an empty string, but ...


In psql and in the receiving database, what does:

show timezone;

return?

Are the timestamp fields you are COPYing into declared with timezone or 
without timezone?




Why  I can not restore it?

* I have using postgresql 9.4
* The backup  is made it with  pg_dump -Fc and
* I try to restore with pg_restore -d database --disable-triggers

* This is  the message: COPY failed for table "stlm_201012ad": ERROR:
 new row for relation "stlm_201012ad" violates check constraint
"time_stamp_201012ad"
DETAIL:  Failing row contains (..., 2011-01-01 00:00:03.925+00, .., 0).

* this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
'2010-12-01'::date AND time_stamp < '2011-01-01'::date)

if I change 2011-01-01 00:00:03.925+00 to 2011-01-01 00:00:03.925-06 it works ok
--




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] recovery dump on database with different timezone

2017-01-23 Thread Edmundo Robles
Hi!

I have backed up a database with a 'UTC' timezone, but I cannot
restore it on a database with 'localtime' timezone.

 I have constraints on timestamps, which are the cause of the errors.
* First, I did: alter database  set timezone='UTC'; but it still did
not restore...
* After that I changed 'timezone' to UTC in postgresql.conf and
restarted, but :(
* At last, I did 'pg_restore -l backup.dmp| sed "s:+00:-06:g"|psql database'
but COPY fails for some records, so some tables are empty :(
* I will try replacing +00 with an empty string, but ...

Why can I not restore it?

* I am using PostgreSQL 9.4
* The backup is made with pg_dump -Fc, and
* I try to restore with pg_restore -d database --disable-triggers

* This is  the message: COPY failed for table "stlm_201012ad": ERROR:  new
row for relation "stlm_201012ad" violates check constraint
"time_stamp_201012ad"
DETAIL:  Failing row contains (..., 2011-01-01 00:00:03.925+00, .., 0).

* this is the constraint: "time_stamp_201012ad" CHECK (time_stamp >=
'2010-12-01'::date AND time_stamp < '2011-01-01'::date)

if I change 2011-01-01 00:00:03.925+00 to 2011-01-01 00:00:03.925-06 it works ok
--


Re: [GENERAL] pg_dump Conflict with recovery

2017-01-23 Thread Israel Brewster
On Jan 23, 2017, at 10:06 AM, Jerry Sievers  wrote:
> 
> Israel Brewster  writes:
> 
>> I have a backup strategy that in part consists of doing pg_dump s on my 
>> various databases. In order to hopefully reduce/prevent operational 
>> slow-down as a result of the
>> backup, I do the dumps from my secondary server, configured as a hot standby 
>> with streaming replication.
>> 
>> In general this works fine, but one of my databases has now grown to the 
>> point that often as not I get the following when trying to dump the database:
>> 
>> ERROR:  canceling statement due to conflict with recovery
>> DETAIL:  User was holding a relation lock for too long.
>> 
>> As I understand it, this is due to the pg_dump taking longer than the 
>> max_standby_streaming_delay of 180s, and as such could be easily fixed by 
>> upping that value in the
>> config. But is that the "right" fix? Or is there a "better" way? 
> 
> "Best" way depends on your needs...
> 
> You can pause your standby and/or configure settings like the one you
> mentioned to tolerate the dump conflicting with replication by
> allowing the standby to lag rather than issuing a cancel.
> 
> select pg_xlog_replay_pause();
> -- dump here
> select pg_xlog_replay_resume();
> 
> The above will of course guarantee that your slave lags, versus fiddling
> with the max delay settings and then being subject to a possibly moving
> target in terms of dump duration and upstream system behavior.

Sounds reasonable. Allows for however long the dump process needs without 
allowing for runaway queries in the general case, as long as I make sure to 
implement things in a way that makes sure the pg_xlog_replay_resume() is always 
called, no matter what happens (not that I've had any failures, I just tend to 
be a bit paranoid about this system).
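
Something along these lines is what I have in mind (a sketch; the host,
database, and path are placeholders, and the pg_xlog_replay_* names are
the 9.x spellings):

#!/usr/bin/env bash
set -euo pipefail

# Pause WAL replay on the standby; the EXIT trap guarantees replay is
# resumed even if the dump fails partway through.
psql -h standby -d mydb -c "SELECT pg_xlog_replay_pause();"
trap 'psql -h standby -d mydb -c "SELECT pg_xlog_replay_resume();"' EXIT

pg_dump -h standby -Fc -f /backups/mydb.dmp mydb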

Thanks!
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---

> 
> 
> 
>> ---
>> Israel Brewster
>> Systems Analyst II
>> Ravn Alaska
>> 5245 Airport Industrial Rd
>> Fairbanks, AK 99709
>> (907) 450-7293
>> ---
>> 
>> 
>> 
> 
> -- 
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800



[GENERAL] Why is materialized view creation a "security-restricted operation"?

2017-01-23 Thread Joshua Chamberlain
Hello,

I see this has been discussed briefly before[1], but I'm still not clear on
what's happening and why.

I wrote a function that uses temporary tables in generating a result set. I
can use it when creating tables or views, e.g.,
CREATE TABLE some_table AS SELECT * FROM my_func();
CREATE VIEW some_view AS SELECT * FROM my_func();

But creating a materialized view fails:
CREATE MATERIALIZED VIEW some_view AS SELECT * FROM my_func();
ERROR:  cannot create temporary table within security-restricted operation

The docs explain that this is expected[2], but not why. On the contrary,
this is actually quite surprising to me, given that tables and views work
just fine. What makes a materialized view so different? Are there any plans
to make this more consistent?
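
For reference, here is a minimal reproduction (a sketch; my_func and
mv_demo are hypothetical stand-ins for my real objects):

CREATE FUNCTION my_func() RETURNS SETOF integer LANGUAGE plpgsql AS $$
BEGIN
    -- The temp table is what trips the security-restricted operation.
    CREATE TEMP TABLE tmp_vals(v integer) ON COMMIT DROP;
    INSERT INTO tmp_vals VALUES (1), (2);
    RETURN QUERY SELECT v FROM tmp_vals;
END $$;

CREATE MATERIALIZED VIEW mv_demo AS SELECT * FROM my_func();
-- ERROR:  cannot create temporary table within security-restricted operation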

Thanks for any help you can provide.

Regards,
Joshua Chamberlain

[1]
https://www.postgresql.org/message-id/CAFjFpRcz3qKQFQo3RynfPinXdOp_42Tz%2BxCqBQdAoe061bMRSw%40mail.gmail.com
[2]
https://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html


Re: [GENERAL] pg_dump Conflict with recovery

2017-01-23 Thread Jerry Sievers
Israel Brewster  writes:

> I have a backup strategy that in part consists of doing pg_dump s on my 
> various databases. In order to hopefully reduce/prevent operational slow-down 
> as a result of the
> backup, I do the dumps from my secondary server, configured as a hot standby 
> with streaming replication.
>
> In general this works fine, but one of my databases has now grown to the 
> point that often as not I get the following when trying to dump the database:
>
> ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
>
> As I understand it, this is due to the pg_dump taking longer than the 
> max_standby_streaming_delay of 180s, and as such could be easily fixed by 
> upping that value in the
> config. But is that the "right" fix? Or is there a "better" way? 

"Best" way depends on your needs...

You can pause your standby and/or configure settings like the one you
mentioned to tolerate the dump conflicting with replication by
allowing the standby to lag rather than issuing a cancel.

select pg_xlog_replay_pause();
-- dump here
select pg_xlog_replay_resume();

The above will of course guarantee that your slave lags, versus fiddling
with the max delay settings and then being subject to a possibly moving
target in terms of dump duration and upstream system behavior.



> ---
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> ---
>
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why does this hot standy archive_command work

2017-01-23 Thread Jerry Sievers
"bto...@computer.org"  writes:

> - Original Message -
>> From: "Jerry Sievers" 
>> To: "Steve Atkins" 
>> Cc: "pgsql-general" 
>> Sent: Monday, January 23, 2017 12:52:46 PM
>> Subject: Re: [GENERAL] Why does this hot standy archive_command work
>> 
>> Steve Atkins  writes:
>> 
>> >> On Jan 20, 2017, at 7:03 PM, bto...@computer.org 
>> >> wrote:
>> >> 
>> >> While learning a bit about basic hot standby configuration, I was
>> >> reviewing an article that used these parameters
>> >> 
>> >> wal_level = 'hot_standby'
>> >> archive_mode = on
>> >> archive_command = 'cd .'
>> 
>> That's just a very silly way of making archive_command a true no-op...
>> 
>> I would have set it to '/bin/true'
>
>
> Roger that. 
>
> Also, I guess I should have maybe been more clear about what I meant by
> "it works" ... the hot standby replication works, which is really what I
> was focusing on.
>
> I think I might have previously been under the mis-impression that
> archiving the WAL to ... somewhere ... was required for hot standby,
> so the no-op threw me, since the standby configuration does not seem
> to specify where to get the primary WAL from. It seems like what is
> happening here is that, since the primary archive is, as you point out,
> a no-op, the standby server just grabs the WAL directly via streaming.

U...

Well, your master does not send WALs to any remote repo, which means
backing it up with any intent of doing PITR is not an option.

If your standby is streaming from a replication slot (only supported in
newer releases) vs standard streaming, then your master will retain WALs
indefinitely if your standby is down or paused... and it will always be
possible to catch up the standby no matter how long it's down.

OTOH, if you are not using a rep slot and *also* not using a remote
archive repo, then once your standby is down long enough, or falls
behind for any reason, beyond what the master still has retained in WALs
(based upon settings like wal_keep_segments and/or min_wal_size)...

You may lose your standby and have to refresh it from a basebackup,
since the needed WALs won't exist anywhere.

There are a lot of configuration choices but perhaps the most common one
is a hybrid of streaming with fallback to WAL fetching from a remote
repo and indeed at my site this is required generally for robustness.
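
For illustration, a common shape for that hybrid (a sketch; the paths
and hosts are hypothetical, and the cp-based commands mirror the
examples in the docs, assuming the archive directory is reachable from
both machines):

# Master, postgresql.conf:
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

# Standby, recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=master user=replicator'
restore_command = 'cp /mnt/server/archivedir/%f %p'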

HTH

> Thanks for replying!
>
>
>> 
>> 
>> 
>> >> max_wal_senders = 1
>> >> hot_standby = on
>> >> 
>> >> 
>> >> How or why that particular archive_command actually works (... and it does
>> >> ... I tried it ...) is not clear to me based on reading of the Postgresql
>> >> documentation on this topic. I would have expected to see an actual copy
>> >> or rsync command, as described in the fine manual at section 25.3.1.
>> >> "Setting Up WAL Archiving"
>> >> 
>> >> The entire example appears at
>> >> 
>> >> https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postgresql-on-an-ubuntu-12-04-vps
>> >> 
>> >> Can anyone enlighten on this topic, or provide a link to an existing
>> >> explanation?
>> >
>> > It's not archiving logs at all, instead relying on streaming them directly
>> > to the slave.
>> >
>> > Changing archive_mode requires a server restart, while changing
>> > archive_command from a command that does nothing, successfully, to a
>> > command that actually archives logs just requires a reload. So this lets
>> > you enable archiving without halting the server by changing the command.
>> >
>> > Or that's how I vaguely recall it working some years ago. Things may have
>> > changed now - you're following a very old tutorial.
>> >
>> > Cheers,
>> >   Steve
>> 
>> --
>> Jerry Sievers
>> Postgres DBA/Development Consulting
>> e: postgres.consult...@comcast.net
>> p: 312.241.7800
>> 
>> 
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>> 
>> 

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump Conflict with recovery

2017-01-23 Thread Israel Brewster
I have a backup strategy that in part consists of doing pg_dumps on my
various databases. In order to hopefully reduce/prevent operational
slow-down as a result of the backup, I do the dumps from my secondary
server, configured as a hot standby with streaming replication.

In general this works fine, but one of my databases has now grown to the
point that often as not I get the following when trying to dump the
database:

ERROR:  canceling statement due to conflict with recovery
DETAIL:  User was holding a relation lock for too long.

As I understand it, this is due to the pg_dump taking longer than the
max_standby_streaming_delay of 180s, and as such could be easily fixed
by upping that value in the config. But is that the "right" fix? Or is
there a "better" way?

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




Re: [GENERAL] Why does this hot standy archive_command work

2017-01-23 Thread bto...@computer.org


- Original Message -
> From: "Jerry Sievers" 
> To: "Steve Atkins" 
> Cc: "pgsql-general" 
> Sent: Monday, January 23, 2017 12:52:46 PM
> Subject: Re: [GENERAL] Why does this hot standy archive_command work
> 
> Steve Atkins  writes:
> 
> >> On Jan 20, 2017, at 7:03 PM, bto...@computer.org 
> >> wrote:
> >> 
> >> While learning a bit about basic hot standby configuration, I was
> >> reviewing an article that used these parameters
> >> 
> >> wal_level = 'hot_standby'
> >> archive_mode = on
> >> archive_command = 'cd .'
> 
> That's just a very silly way of making archive_command a true no-op...
> 
> I would have set it to '/bin/true'


Roger that. 

Also, I guess I should have maybe been more clear about what I meant by "it
works" ... the hot standby replication works, which is really what I was
focusing on.

I think I might have previously been under the mis-impression that archiving
the WAL to ... somewhere ... was required for hot standby, so the no-op threw
me, since the standby configuration does not seem to specify where to get the
primary WAL from. It seems like what is happening here is that, since the
primary archive is, as you point out, a no-op, the standby server just grabs
the WAL directly via streaming.

Thanks for replying!


> 
> 
> 
> >> max_wal_senders = 1
> >> hot_standby = on
> >> 
> >> 
> >> How or why that particular archive_command actually works (... and it does
> >> ... I tried it ...) is not clear to me based on reading of the Postgresql
> >> documentation on this topic. I would have expected to see an actual copy
> >> or rsync command, as described in the fine manual at section 25.3.1.
> >> "Setting Up WAL Archiving"
> >> 
> >> The entire example appears at
> >> 
> >> https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postgresql-on-an-ubuntu-12-04-vps
> >> 
> >> Can anyone enlighten on this topic, or provide a link to an existing
> >> explanation?
> >
> > It's not archiving logs at all, instead relying on streaming them directly
> > to the slave.
> >
> > Changing archive_mode requires a server restart, while changing
> > archive_command from a command that does nothing, successfully, to a
> > command that actually archives logs just requires a reload. So this lets
> > you enable archiving without halting the server by changing the command.
> >
> > Or that's how I vaguely recall it working some years ago. Things may have
> > changed now - you're following a very old tutorial.
> >
> > Cheers,
> >   Steve
> 
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800
> 
> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why does this hot standy archive_command work

2017-01-23 Thread Jerry Sievers
Steve Atkins  writes:

>> On Jan 20, 2017, at 7:03 PM, bto...@computer.org  
>> wrote:
>> 
>> While learning a bit about basic hot standby configuration, I was reviewing 
>> an article that used these parameters
>> 
>> wal_level = 'hot_standby'
>> archive_mode = on
>> archive_command = 'cd .'

That's just a very silly way of making archive_command a true no-op...

I would have set it to '/bin/true'



>> max_wal_senders = 1
>> hot_standby = on
>> 
>> 
>> How or why that particular archive_command actually works (... and it does 
>> ... I tried it ...) is not clear to me based on reading of the Postgresql 
>> documentation on this topic. I would have expected to see an actual copy or 
>> rsync command, as described in the fine manual at section 25.3.1. "Setting 
>> Up WAL Archiving"
>> 
>> The entire example appears at 
>> 
>> https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postgresql-on-an-ubuntu-12-04-vps
>> 
>> Can anyone enlighten on this topic, or provide a link to an existing 
>> explanation?
>
> It's not archiving logs at all, instead relying on streaming them directly to 
> the slave.
>
> Changing archive_mode requires a server restart, while changing 
> archive_command from a command that does nothing, successfully, to a command 
> that actually archives logs just requires a reload. So this lets you enable 
> archiving without halting the server by changing the command.
>
> Or that's how I vaguely recall it working some years ago. Things may have 
> changed now - you're following a very old tutorial.
>
> Cheers,
>   Steve

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the purpose of PostGIS on PostgreSQL?

2017-01-23 Thread Adrian Klaver

On 01/23/2017 09:08 AM, Kased, Razy (Chemistry and Geosciences) wrote:

I recently came across this question: "What is the purpose of PostGIS on
PostgreSQL?" and wanted to know how this mailing list would respond.


See the paragraph under the heading below:

http://postgis.org/

What is PostGIS?


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What is the purpose of PostGIS on PostgreSQL?

2017-01-23 Thread Kased, Razy (Chemistry and Geosciences)
I recently came across this question: "What is the purpose of PostGIS on
PostgreSQL?" and wanted to know how this mailing list would respond.


Thanks,




[GENERAL] FTS prefix search - put full hits first

2017-01-23 Thread cen

Hi

I have this FTS query:

SELECT * FROM receivers r WHERE r.tsv @@ unaccent('john:*')::tsquery 
ORDER BY ts_rank(r.tsv, unaccent('john:*')::tsquery) DESC;


Is there any way to tell FTS to put records with "John" first and others
after that (Johnny, Johnson, etc.)?


Basically, I want to have full word hits on top.
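
The closest I have come up with (an untested sketch) is to order on an
exact, unexpanded match before the rank, since a boolean sorts
true-first under DESC:

SELECT * FROM receivers r
WHERE r.tsv @@ unaccent('john:*')::tsquery
ORDER BY r.tsv @@ unaccent('john')::tsquery DESC,
         ts_rank(r.tsv, unaccent('john:*')::tsquery) DESC;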


Best regards, Klemen



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Detailed progress reporting for "vacuuming indexes" stage

2017-01-23 Thread Alexander Shchapov
Hi all,

We've recently upgraded to 9.6.1 and are now playing with the newly
available pg_stat_progress_vacuum view.

There are 7 stages for the standard VACUUM process, which are reportable
via this view. For time-consuming stages like "scanning heap" or
"vacuuming heap" you can get an idea of stage progress by computing:

(heap_blks_scanned | heap_blks_vacuumed) / heap_blks_total * 100%
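
In practice we compute that with a query along these lines (a sketch
against the 9.6 pg_stat_progress_vacuum columns; the percentages are
only meaningful during the heap phases):

SELECT p.pid, c.relname, p.phase,
       round(100.0 * p.heap_blks_scanned / nullif(p.heap_blks_total, 0), 1)
           AS pct_scanned,
       round(100.0 * p.heap_blks_vacuumed / nullif(p.heap_blks_total, 0), 1)
           AS pct_vacuumed
FROM pg_stat_progress_vacuum AS p
JOIN pg_class AS c ON c.oid = p.relid;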

However, the "vacuuming indexes" stage does not give any idea of how
long it might take. We have some rather big tables (500M+ records) with
5-7 indexes. Sometimes it takes us 20+ hours to get a table vacuumed,
and all the progress reporting we have for stage 3 is that it is stage 3.

According to the code
(https://github.com/postgres/postgres/blob/1d25779284fe1ba08ecd57e647292a9deb241376/src/backend/commands/vacuumlazy.c#L708-L712)
it looks like there might be a way to report the number of indexes left
to be vacuumed.

The question is: is it reasonable to expect a patch that adds more
detailed stage 3 reporting? Say, the number of the index being vacuumed.

Thanks.

-- 
Alexander Shchapov


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general