Re: Using DSN Connection and knowing windows username

2018-06-21 Thread Łukasz Jarych
Hmm, one problem.

How do I distinguish the value for a specific user?
I created a new table in PostgreSQL, and when a user logs into the system a new
value is added to the field "System_user".
But how do I tell the PostgreSQL database that it should use the specific value
(for the user who is inputting the data, not for others)?

Best,
Luke

2018-06-21 6:15 GMT+02:00 Łukasz Jarych :

> Thank you Adrian,
>
> In the meantime just an idea, but could you capture the system user in a
>> table in Access and use that to pass on to Postgres?
>
>
> Brilliant! Simple and genius!
>
> The purpose is to have a history log table of DML and DDL changes,
> using triggers.
>
> Best,
> Luke
>
>
>
>
> 2018-06-21 0:11 GMT+02:00 Adrian Klaver :
>
>> On 06/20/2018 07:06 AM, Łukasz Jarych wrote:
>>
>>> David G,
>>>
>>> thank you.
>>> Can you confirm whether I am thinking about this correctly?
>>>
>>> So I can set up authentication to know which user is logged on and use
>>> this as the PostgreSQL user?
>>>
>>
>> Only if the system user is a postgres user or can be mapped to one:
>>
>> https://www.postgresql.org/docs/10/static/auth-username-maps.html
>>
>>
>>> But I think it will not be possible to use a DSN connection with this.
>>>
>>
>> If you are talking about the ODBC DSN you use to create the linked table
>> in Access, then you are correct: you are limited to whatever user is
>> specified in the ODBC Manager.
>>
>> It would help to know what you plan to use the user name for.
>>
>> In the meantime just an idea, but could you capture the system user in a
>> table in Access and use that to pass on to Postgres?
>>
>>
>>
>>> Best ,
>>> Luke
>>>
>>> 2018-06-20 15:34 GMT+02:00 David G. Johnston:
>>>
>>> On Wednesday, June 20, 2018, Łukasz Jarych wrote:
>>>
>>> How do I know, in PostgreSQL, which specific Windows user is using the
>>> database?
>>>
>>> You cannot.  All the server knows is the specific user credentials
>>> it is authenticating.
>>>
>>> That said, you can authenticate those credentials in such a way that,
>>> knowing the signed-on user, you would also know who they are in
>>> any environment that uses the same authentication source; and if
>>> that source supplies their Windows identity, you are golden. The
>>> specific setups involved here are outside my experience, though.
>>>
>>> David J.
>>>
>>>
>>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


Re: Using DSN Connection and knowing windows username

2018-06-21 Thread Łukasz Jarych
Hi guys,

The best option here, I think, is to:

1. Create user logins in Access.
2. Use a DSN-less ODBC connection.
3. When a user logs in to Access, take the password and use it in the
PostgreSQL back end (BE).

Best,
Luke

2018-06-21 9:17 GMT+02:00 Łukasz Jarych :

> Hmm, one problem.
>
> How do I distinguish the value for a specific user?
> I created a new table in PostgreSQL, and when a user logs into the system a new
> value is added to the field "System_user".
> But how do I tell the PostgreSQL database that it should use the specific value
> (for the user who is inputting the data, not for others)?
>
> Best,
> Luke
>
> 2018-06-21 6:15 GMT+02:00 Łukasz Jarych :
>
>> Thank you Adrian,
>>
>> In the meantime just an idea, but could you capture the system user in a
>>> table in Access and use that to pass on to Postgres?
>>
>>
>> Brilliant! Simple and genius!
>>
>> The purpose is to have a history log table of DML and DDL changes,
>> using triggers.
>>
>> Best,
>> Luke
>>
>>
>>
>>
>> 2018-06-21 0:11 GMT+02:00 Adrian Klaver :
>>
>>> On 06/20/2018 07:06 AM, Łukasz Jarych wrote:
>>>
 David G,

 thank you.
 Can you confirm whether I am thinking about this correctly?

 So I can set up authentication to know which user is logged on and use
 this as the PostgreSQL user?

>>>
>>> Only if the system user is a postgres user or can be mapped to one:
>>>
>>> https://www.postgresql.org/docs/10/static/auth-username-maps.html
>>>
>>>
 But I think it will not be possible to use a DSN connection with this.

>>>
>>> If you are talking about the ODBC DSN you use to create the linked table
>>> in Access, then you are correct: you are limited to whatever user is
>>> specified in the ODBC Manager.
>>>
>>> It would help to know what you plan to use the user name for.
>>>
>>> In the meantime just an idea, but could you capture the system user in a
>>> table in Access and use that to pass on to Postgres?
>>>
>>>
>>>
 Best ,
 Luke

 2018-06-20 15:34 GMT+02:00 David G. Johnston <
 david.g.johns...@gmail.com >:

 On Wednesday, June 20, 2018, Łukasz Jarych wrote:
 How do I know, in PostgreSQL, which specific Windows user is using the
 database?

 You cannot.  All the server knows is the specific user credentials
 it is authenticating.

 That said, you can authenticate those credentials in such a way that,
 knowing the signed-on user, you would also know who they are in
 any environment that uses the same authentication source; and if
 that source supplies their Windows identity, you are golden. The
 specific setups involved here are outside my experience, though.

 David J.



>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>
>>
>


How can I stop a long run pgAgent job?

2018-06-21 Thread a
Hi 


I'm using pgAdmin 4, pgAgent and PostgreSQL 10 on Windows Server.


I started a job but, for some reason, it has been running for a long time. Is there
a way I can terminate it?


Thanks


Shore

Re: How can I stop a long run pgAgent job?

2018-06-21 Thread Fabio Pardi
Hi Shore,

Have a look at:

https://www.postgresql.org/docs/current/static/functions-admin.html

'pg_terminate_backend' is probably what you are looking for
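
A minimal sketch (the filter on application_name is just one guess at how to
spot the pgAgent session; adjust it to whatever identifies your job, e.g. the
query text):

  -- find the backend(s) that pgAgent opened for the job
  SELECT pid, application_name, state, query
  FROM pg_stat_activity
  WHERE application_name LIKE 'pgAgent%';

  -- terminate the offending backend by its pid
  SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
  WHERE application_name LIKE 'pgAgent%'
    AND state = 'active';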

regards,

fabio pardi


On 21/06/18 11:32, a wrote:
> Hi 
>
> I'm using pgAdmin 4, pgAgent and PostgreSQL 10 on Windows Server.
>
> I started a job but, for some reason, it has been running for a long time. Is there
> a way I can terminate it?
>
> Thanks
>
> Shore



Re: Using DSN Connection and knowing windows username

2018-06-21 Thread Adrian Klaver

On 06/21/2018 12:17 AM, Łukasz Jarych wrote:

Hmm, one problem.

How do I distinguish the value for a specific user?
I created a new table in PostgreSQL, and when a user logs into the system a new
value is added to the field "System_user".
But how do I tell the PostgreSQL database that it should use the specific value
(for the user who is inputting the data, not for others)?


That would depend on where you want to input the user name, and for what
actions.
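
If the goal is the audit trigger you mentioned earlier, one possible sketch is
to have Access run a statement right after the connection is opened that sets
a custom session variable, and have the trigger read it back. All names here
('myapp.system_user', history_log, log_change) are hypothetical:

  -- run by Access immediately after connecting; false = keep for the session
  SELECT set_config('myapp.system_user', 'DOMAIN\jsmith', false);

  -- audit trigger function reads the value back (the two-argument form of
  -- current_setting needs PostgreSQL 9.6 or later)
  CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
  BEGIN
      INSERT INTO history_log (system_user, table_name, operation, changed_at)
      VALUES (current_setting('myapp.system_user', true),
              TG_TABLE_NAME, TG_OP, now());
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

Since the setting lives in the session, each connection carries its own value,
so other users' changes are logged under their own names.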




Best,
Luke




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Using DSN Connection and knowing windows username

2018-06-21 Thread Adrian Klaver

On 06/21/2018 01:33 AM, Łukasz Jarych wrote:

Hi guys,

The best option here, I think, is to:

1. Create user logins in Access.
2. Use a DSN-less ODBC connection.
3. When a user logs in to Access, take the password and use it in the
PostgreSQL back end (BE).


That would work as long as the user name and password used to log in to
Access are also recognized by Postgres.
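
For example, matching roles could be created up front; a sketch with purely
illustrative names (the group role app_users is assumed to hold the actual
table privileges):

  CREATE ROLE "jsmith" LOGIN PASSWORD 'common_password';
  GRANT app_users TO "jsmith";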




Best,
Luke



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Using DSN Connection and knowing windows username

2018-06-21 Thread Łukasz Jarych
Hi ,

That would work as long as the user name and password used to log in to
> Access are also recognized by Postgres.


I created users in PostgreSQL and I am connecting to the database using the
Windows username and a common password.

Thank you for help.

Best,
Luke

2018-06-21 15:33 GMT+02:00 Adrian Klaver :

> On 06/21/2018 01:33 AM, Łukasz Jarych wrote:
>
>> Hi guys,
>>
>> The best option here, I think, is to:
>>
>> 1. Create user logins in Access.
>> 2. Use a DSN-less ODBC connection.
>> 3. When a user logs in to Access, take the password and use it in the
>> PostgreSQL back end (BE).
>>
>
> That would work as long as the user name and password used to log in to
> Access are also recognized by Postgres.
>
>
>
>> Best,
>> Luke
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: How can I stop a long run pgAgent job?

2018-06-21 Thread Adam Brusselback
As said, terminating a backend is the current way to kill a job.

An alternative if this is something you do often:
https://github.com/GoSimpleLLC/jpgAgent
jpgAgent supports terminating a job by issuing a NOTIFY command on the
correct channel, like this: NOTIFY jpgagent_kill_job, 'job_id_here';
It works well with Windows, and all the existing pgAgent admin tools work
to manage it; it's just a drop-in replacement for the agent portion.


pgp_sym_decrypt() - error 39000: wrong key or corrupt data

2018-06-21 Thread Moreno Andreo

Hi,
    while playing with pgcrypto I ran into a strange issue (postgresql 
9.5.3 x86 on Windows 7)


Having a table with a field
dateofbirth text

I made the following sequence of SQL commands
update tbl_p set dateofbirth = pgp_sym_encrypt('2018-06-21', 'AES_KEY') 
where codguid = '0001-0001-0001-0001-0001';

OK

select pgp_sym_decrypt(dateofbirth::bytea, 'AES_KEY') as datanasc from 
tbl_p where codguid = '0001-0001-0001-0001-0001'

'2018-06-21'

select * from tab_paz where pgp_sym_decrypt(natoil::bytea, 'AES_KEY') = 
'2018-06-21'

ERROR:  Wrong key or corrupt data
** Error **

ERROR: Wrong key or corrupt data
SQL state: 39000

Can't find reference anywhere...
Any help would be appreciated.
Thanks,
Moreno.-




Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Pierre Timmermans
Hi Michael
On Thursday, June 21, 2018, 7:28:13 AM GMT+2, Michael Paquier 
 wrote:  
 
>You should avoid top-posting on the Postgres lists, this is not the
>usual style used by people around :)

Will do, but Yahoo Mail! does not seem to like that, so I am typing the > myself


>Attached is a patch which includes your suggestion.  What do you think?
>As that's an improvement, only HEAD would get that clarification.

Yes, I think it is now perfectly clear. Much appreciated to have the chance to
contribute to the doc, by the way; it is very nice.
>Perhaps.  There is really nothing preventing one to add a recovery.conf
>afterwards, which is also why pg_basebackup -R exists.  I do that as
>well for some of the framework I work with and maintain.
I just went to the doc to check about this -R option :-)
Pierre

  

Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Ravi Krishna
> 
> 
> >You should avoid top-posting on the Postgres lists, this is not the
> >usual style used by people around :)
> 
> Will do, but Yahoo Mail! does not seem to like that, so I am typing the > 
> myself
> 

Same here, even though I use Mac Mail. But it is not Yahoo alone.
Most of the web email clients have resorted to top posting.  I miss the old
days of Outlook Express, which was so '>' friendly.  I think Gmail allows
'>' when you click on the dots to expand the mail you are replying to, but it
messes up the justification and formatting.

The best for '>':  Unix elm :-)

Re: SQL Query never ending...

2018-06-21 Thread DiasCosta

Hello David and Fabrízio,


The names of the tables and indexes differ from the original script. 
Only the names.


This is the query plan for only 19684 rows.
I have another query running for around 3 rows, but it takes an 
eternity to finish.

If it finishes in acceptable time I'll make it available to you.

As on previous optimization attempts, I submitted this execution plan to
https://explain.depesz.com, but now, as happened then, I am not able to
extract enough information to decide what to do or to choose a path
leading to optimization.


The environment conditions are exactly the same as described in my 
previous message.



Thank you in advance for your attention and help.
They will be greatly appreciated.

Dias Costa
--
***
"QUERY PLAN"
"Nested Loop  (cost=3336.02..3353.51 rows=1 width=528) (actual 
time=867.213..6452673.494 rows=19684 loops=1)"
"  Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), tt_km_por_etapa_2017.etapa_km, 
(((count(*)))::numeric * tt_km_por_etapa_2017.etapa_km), 
((sumcount(*)))::numeric * k.etapa_km))) / 
(tt_eotb1.eotb_etapas)::numeric), tr (...)"
"  Join Filter: ((at_2.operador = at_5.operador) AND 
(tt_eotb1.titulo_base = n2v_4.titulo_base))"

"  Rows Removed by Join Filter: 157472"
"  Buffers: local hit=418076253"
"  ->  Nested Loop  (cost=2658.99..2673.26 rows=1 width=782) (actual 
time=744.047..6272023.716 rows=19684 loops=1)"
"    Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, tt_eotb1.eotb_etapas, tt_eotb1.operador, 
tt_eotb1.titulo_b (...)"
"    Join Filter: ((at_2.operador = at_4.operador) AND 
(tt_eotb1.titulo_base = n2v_3.titulo_base))"

"    Rows Removed by Join Filter: 157472"
"    Buffers: local hit=418064955"
"    ->  Nested Loop  (cost=1329.63..1337.01 rows=1 width=686) 
(actual time=369.637..1236.464 rows=19684 loops=1)"
"  Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, tt_eotb1.eotb_etapas, tt_eotb1.operador, 
tt_eotb1.ti (...)"

"  Buffers: local hit=558900"
"  ->  Nested Loop  (cost=1329.49..1336.74 rows=1 width=614) 
(actual time=369.631..1126.109 rows=19684 loops=1)"
"    Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, (sumcount(*)))::numeric * 
k.etapa_km))), a (...)"

"    Buffers: local hit=519532"
"    ->  Nested Loop  (cost=1329.36..1336.47 rows=1 
width=542) (actual time=369.625..1015.389 rows=19684 loops=1)"
"  Output: at_2.operador, at_2.num_serie, 
at_2.titulo, n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, (sumcount(*)))::numeric * k.etapa_km 
(...)"

"  Buffers: local hit=480164"
"  ->  Nested Loop (cost=1329.22..1336.20 rows=1 
width=470) (actual time=369.614..895.215 rows=19684 loops=1)"
"    Output: at_2.operador, at_2.num_serie, 
at_2.titulo, n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, (sumcount(*)))::numeric * k.et (...)"

"    Buffers: local hit=440796"
"    ->  Merge Join (cost=1328.95..1333.92 
rows=1 width=358) (actual time=369.586..503.283 rows=19684 loops=1)"
"  Output: at_2.operador, 
at_2.num_serie, at_2.titulo, n2v_1.titulo_base, (count(*)), 
at_2.ticket_code, at_2.ticket_operator_code, n2v_1.cod_titulo, 
(sumcount(*)))::numeric * k.etapa_km))), at_1.operador, n2v.titulo_b 
(...)"
"  Merge Cond: (at_1.operador = 
at_2.operador)"
"  Join Filter: (n2v_1.titulo_base = 
n2v.titulo_base)"

"  Rows Removed by Join Filter: 157472"
"  Buffers: local hit=22563"
"  ->  GroupAggregate 
(cost=672.74..674.98 rows=1 width=96) (actual time=119.552..128.686 
rows=9 loops=1)"
"    Output: at_1.operador, 
n2v.titulo_base, sumcount(*)))::numeric * k.etapa_km))"
"    Group Key: at_1.operador, 
n2v.titulo_base"

"    Buffers: local hit=11295"
"    ->  Merge 

Re: SQL Query never ending...

2018-06-21 Thread Tom Lane
DiasCosta  writes:
> This is the query plan for only 19684 rows.

I think you're getting a bad query plan, mostly as a result of two
factors:

* Poor row estimates.  It looks like the bottom-most misestimations
are on temp tables, which makes me wonder whether you've ANALYZEd
those tables.  Your application has to do that explicitly after
populating the tables; auto-analyze can't help on temp tables.

* Too many tables --- I count 33 table scans in this query.  You
might get better planning results by raising join_collapse_limit
and/or from_collapse_limit, but it will come at a cost in planning
time, and in any case a query with this many tables is never likely
to be cheap.  You might want to think about restructuring your schema
to not need so many tables, or maybe just do some hand optimization
of the query to eliminate unnecessary joins.  (It looks to me like
at least some of the joins to tt_eotb1 might be unnecessary?)
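
For instance (a minimal sketch, assuming tt_km_por_etapa_2017 is one of the
temp tables built by the application; the limit value just needs to cover the
number of tables in the query):

  ANALYZE tt_km_por_etapa_2017;   -- repeat for each populated temp table

  SET join_collapse_limit = 33;
  SET from_collapse_limit = 33;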

regards, tom lane



Re: pgp_sym_decrypt() - error 39000: wrong key or corrupt data

2018-06-21 Thread Adrian Klaver

On 06/21/2018 08:36 AM, Moreno Andreo wrote:

Hi,
     while playing with pgcrypto I ran into a strange issue (postgresql 
9.5.3 x86 on Windows 7)


Having a table with a field
dateofbirth text

I made the following sequence of SQL commands
update tbl_p set dateofbirth = pgp_sym_encrypt('2018-06-21', 'AES_KEY') 
where codguid = '0001-0001-0001-0001-0001';

OK

select pgp_sym_decrypt(dateofbirth::bytea, 'AES_KEY') as datanasc from 
tbl_p where codguid = '0001-0001-0001-0001-0001'

'2018-06-21'

select * from tab_paz where pgp_sym_decrypt(natoil::bytea, 'AES_KEY') = 
'2018-06-21'


You switched gears above.

What is the data type of the natoil field in table tab_paz?

Was the data encrypted in it using the 'AES_KEY'?

I can replicate the below by doing:

select pgp_sym_decrypt(pgp_sym_encrypt('2018-06-21', 'AES_KEY'), 'AES');
ERROR:  Wrong key or corrupt data
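
If some rows in tab_paz were never encrypted, or were encrypted with a
different key, the WHERE clause will hit them and abort the whole query.  A
quick way to locate such rows is a small wrapper that swallows the error
(try_decrypt is a made-up helper name, just for illustration):

  CREATE OR REPLACE FUNCTION try_decrypt(val bytea, key text) RETURNS text AS $$
  BEGIN
      RETURN pgp_sym_decrypt(val, key);
  EXCEPTION WHEN OTHERS THEN
      RETURN NULL;    -- this row cannot be decrypted with the given key
  END;
  $$ LANGUAGE plpgsql;

  SELECT * FROM tab_paz
  WHERE natoil IS NOT NULL
    AND try_decrypt(natoil::bytea, 'AES_KEY') IS NULL;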



ERROR:  Wrong key or corrupt data
** Error **

ERROR: Wrong key or corrupt data
SQL state: 39000

Can't find reference anywhere...
Any help would be appreciated.
Thanks,
Moreno.-






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-06-21 Thread David Pacheco
On Wed, Jun 20, 2018 at 10:35 AM, Jerry Jelinek 
wrote:

> As Dave described in his original email on this topic, we'd like to avoid
> recycling WAL files since that can cause performance issues when we have a
> read-modify-write on a file that has dropped out of the cache.
>
> I have implemented a small change to allow WAL recycling to be disabled.
> It is visible at:
> https://cr.joyent.us/#/c/4263/
>
> I'd appreciate getting any feedback on this.
>
> Thanks,
> Jerry
>
>

For reference, there's more context in this thread from several months ago:
https://www.postgresql.org/message-id/flat/CACukRjO7DJvub8e2AijOayj8BfKK3XXBTwu3KKARiTr67M3E3w%40mail.gmail.com#cacukrjo7djvub8e2aijoayj8bfkk3xxbtwu3kkaritr67m3...@mail.gmail.com

I'll repeat the relevant summary here:

tl;dr: We've found that under many conditions, PostgreSQL's re-use of old
> WAL files appears to significantly degrade query latency on ZFS.  The
> reason is
> complicated and I have details below.  Has it been considered to make this
> behavior tunable, to cause PostgreSQL to always create new WAL files
> instead of re-using old ones?


Thanks,
Dave


Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Vik Fearing
On 21/06/18 07:27, Michael Paquier wrote:
> Attached is a patch which includes your suggestion.  What do you think?
> As that's an improvement, only HEAD would get that clarification.

Say what?  If the clarification applies to previous versions, as it
does, it should be backpatched.  This isn't a change in behavior, it's a
change in the description of existing behavior.
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: using pg_basebackup for point in time recovery

2018-06-21 Thread David G. Johnston
On Thu, Jun 21, 2018 at 4:26 PM, Vik Fearing 
wrote:

> On 21/06/18 07:27, Michael Paquier wrote:
> > Attached is a patch which includes your suggestion.  What do you think?
> > As that's an improvement, only HEAD would get that clarification.
>
> Say what?  If the clarification applies to previous versions, as it
> does, it should be backpatched.  This isn't a change in behavior, it's a
> change in the description of existing behavior.
>

Generally only actual bug fixes get back-patched; but I'd have to say this
looks like it could easily be classified as one.

Before: These are backups that cannot be used for PITR
After: These are backups that could be used for PITR if ...

Changing a cannot to a can seems like we are fixing a bug in the
documentation.

Some comments on the patch itself:

"recover up to the wanted recovery point." - "desired recovery point" reads
better to me


"These backups are typically much faster to backup and restore" - "These
backups are typically much faster to create and restore"; avoid repeated
use of the word backup

"but can result as well in larger backup sizes" - "but can result in larger
backup sizes", drop the unnecessary 'as well'

"sizes, so the speed of one method or the other is to evaluate carefully
first" - that is just wrong as-is; suggest just removing it.


To cover the last three items as a whole I'd suggest:

"These backups are typically much faster to create and restore, but
generate larger file sizes, compared to pg_dump."

For the last sentence I'd suggest:

"Note that because WAL cannot be applied on top of a restored pg_dump
backup it is considered a cold backup and cannot be used for
point-in-time-recovery."

I like adding "cold backup" here to help contrast and explain why a base
backup is considered a "hot backup".  The rest is style to make that flow
better.

David J.


Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-06-21 Thread Thomas Munro
On Fri, Jun 22, 2018 at 11:22 AM, David Pacheco  wrote:
> On Wed, Jun 20, 2018 at 10:35 AM, Jerry Jelinek 
> wrote:
>> I have implemented a small change to allow WAL recycling to be disabled.
>> It is visible at:
>> https://cr.joyent.us/#/c/4263/
>>
>> I'd appreciate getting any feedback on this.

>> tl;dr: We've found that under many conditions, PostgreSQL's re-use of old
>> WAL files appears to significantly degrade query latency on ZFS.

I haven't tested it, but it looks reasonable to me.  It needs documentation
in doc/src/sgml/config.sgml.  It should be listed in
src/backend/utils/misc/postgresql.conf.sample.  We'd want a patch
against our master branch.  Could you please register it in
commitfest.postgresql.org so we don't lose track of it?

Hey, a question about PostgreSQL on ZFS: what do you guys think about
pg_flush_data() in fd.c?  It does mmap(), msync(), munmap() to try to
influence writeback?  I wonder if at least on some operating systems
that schlepps a bunch of data out of ZFS ARC into OS page cache, kinda
trashing the latter?

-- 
Thomas Munro
http://www.enterprisedb.com



Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Michael Paquier
On Thu, Jun 21, 2018 at 04:42:00PM -0400, Ravi Krishna wrote:
> Same here, even though I use Mac Mail. But it is not Yahoo alone.
> Most of the web email clients have resorted to top posting.  I miss
> the old days of Outlook Express, which was so '>' friendly.  I think
> Gmail allows '>' when you click on the dots to expand the mail you
> are replying to, but it messes up the justification and formatting.

Those products have good practices when it comes to breaking and redefining
what the concept behind emails is...
--
Michael




Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Michael Paquier
On Thu, Jun 21, 2018 at 04:50:38PM -0700, David G. Johnston wrote:
> Generally only actual bug fixes get back-patched; but I'd have to say
> this looks like it could easily be classified as one.

Everybody is against me here ;)

> Some comments on the patch itself:
> 
> "recover up to the wanted recovery point." - "desired recovery point" reads
> better to me
> 
> 
> "These backups are typically much faster to backup and restore" - "These
> backups are typically much faster to create and restore"; avoid repeated
> use of the word backup

Okay.

> "but can result as well in larger backup sizes" - "but can result in larger
> backup sizes", drop the unnecessary 'as well'

Okay.

> I like adding "cold backup" here to help contrast and explain why a base
> backup is considered a "hot backup".  The rest is style to make that flow
> better.

Indeed.  The section uses hot backups a lot.

What do folks here think about the updated version attached?
--
Michael
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 982776ca0a..af48aa64c2 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1430,12 +1430,15 @@ restore_command = 'cp /mnt/server/archivedir/%f %p'
  Standalone Hot Backups
 
  
-  It is possible to use PostgreSQL's backup facilities to
-  produce standalone hot backups. These are backups that cannot be used
-  for point-in-time recovery, yet are typically much faster to backup and
-  restore than pg_dump dumps.  (They are also much larger
-  than pg_dump dumps, so in some cases the speed advantage
-  might be negated.)
+  It is possible to use PostgreSQL's backup
+  facilities to produce standalone hot backups.  These are backups that
+  could be used for point-in-time recovery if combined with a WAL
+  archive able to recover up to the wanted recovery point.  These backups
+  are typically much faster to create and restore than
+  pg_dump for large deployments but can result
+  in larger backup sizes.  Note also that
+  pg_dump backups cannot be used for
+  point-in-time recovery.
  
 
  

