[HACKERS] Re: pg_basebackup: could not get transaction log end position from server: FATAL: could not open file ./pg_hba.conf~: Permission denied

2014-05-16 Thread David G Johnston
Andres Freund wrote:
> On 2014-05-16 18:29:25 +0200, Magnus Hagander wrote:
>> On Fri, May 16, 2014 at 6:25 PM, Andres Freund <andres@> wrote:
>>> On 2014-05-16 18:20:35 +0200, Magnus Hagander wrote:
>>>> On Fri, May 16, 2014 at 5:46 PM, Joshua D. Drake <jd@> wrote:
>>>>> At a minimum:
>>>>>
>>>>> Check to see if there is going to be a permission error BEFORE
>>>>> the base backup begins:
>>>>>
>>>>> starting basebackup:
>>>>>   checking perms: ERROR no access to pg_hba.conf~ base backup
>>>>>   will fail
>>>>
>>>> That's pretty much what it does if you enable the progress meter.
>>>> I realize you don't necessarily want that one, but we could have a
>>>> switch that still tells the server to measure the size, but not
>>>> actually print the output? While it costs a bit of overhead to do
>>>> that, it's certainly a lot safer than ignoring errors.
>>>
>>> Don't think it'll show you that error - that mode only stat()s
>>> files, right? So you'd need to add access() or open() calls.
>>
>> You're right, we don't. I thought we did, but was clearly
>> remembering wrong.
>>
>> I guess we could add an access() call to that codepath, though. Not
>> sure if that's going to cause any real overhead compared to the rest
>> of what we're doing anyway?
>
> It's not free. But I don't think it'd seriously matter in comparison.
>
> But it doesn't protect you if the file is created during the backup -
> which, as you know, can take a long time, for example because
> somebody felt the need to increase wal_keep_segments.
>
> Greetings,
>
> Andres Freund
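For concreteness, here's a minimal sketch of the access() idea being
discussed (this is not the actual basebackup.c code; estimate_dir() and
its error handling are made up for illustration). The size-estimation
pass already lstat()s every file, so checking readability at the same
time costs one extra syscall per file:

#include <dirent.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Walk a directory tree, summing file sizes as the progress-meter
 * estimation does, but also verify each regular file is readable so a
 * stray root-owned pg_hba.conf~ fails here instead of mid-backup.
 */
static long long
estimate_dir(const char *path)
{
    DIR        *dir = opendir(path);
    struct dirent *de;
    long long   total = 0;

    if (dir == NULL)
    {
        fprintf(stderr, "could not open directory \"%s\": %s\n",
                path, strerror(errno));
        exit(1);
    }

    while ((de = readdir(dir)) != NULL)
    {
        char        sub[4096];
        struct stat st;

        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        snprintf(sub, sizeof(sub), "%s/%s", path, de->d_name);

        if (lstat(sub, &st) != 0)
        {
            fprintf(stderr, "could not stat \"%s\": %s\n",
                    sub, strerror(errno));
            exit(1);
        }

        if (S_ISDIR(st.st_mode))
            total += estimate_dir(sub);
        else if (S_ISREG(st.st_mode))
        {
            /* The extra check: fail now, not two terabytes from now. */
            if (access(sub, R_OK) != 0)
            {
                fprintf(stderr, "no read access to \"%s\": %s\n",
                        sub, strerror(errno));
                exit(1);
            }
            total += st.st_size;
        }
    }
    closedir(dir);
    return total;
}

int
main(int argc, char **argv)
{
    printf("estimated size: %lld bytes\n",
           estimate_dir(argc > 1 ? argv[1] : "."));
    return 0;
}

As Andres says, this still can't catch a file created after the check
runs, so it narrows the window rather than closing it.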

Can we simply back up the non-data parts of $PGDATA first and then move
on to the data parts? For the files we'd be dealing with, it would be
quick enough to just try and fail immediately, rather than checking
every possible precondition up front. The main issue seems to be the
case where 2TB of data get backed up and then a small 1k file blows
away all that work. Let's do those 1k files first.
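As a rough sketch of that ordering (FileEntry and send_file() are
invented names for illustration, not anything in the backend): collect
the list of files to send, sort it smallest-first, and any cheap
failure surfaces in the first second instead of the last.

#include <stdlib.h>

/* Invented types for illustration; not PostgreSQL APIs. */
typedef struct FileEntry
{
    const char *path;
    long long   size;
} FileEntry;

static int
cmp_size_asc(const void *a, const void *b)
{
    const FileEntry *fa = a;
    const FileEntry *fb = b;

    return (fa->size > fb->size) - (fa->size < fb->size);
}

/*
 * Send small files first: any cheap failure (permissions, vanished
 * file) happens before the big relation files are streamed.
 */
static void
send_files_smallest_first(FileEntry *files, size_t n,
                          void (*send_file)(const FileEntry *))
{
    qsort(files, n, sizeof(FileEntry), cmp_size_asc);
    for (size_t i = 0; i < n; i++)
        send_file(&files[i]);
}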

David J.




Re: [HACKERS] Re: pg_basebackup: could not get transaction log end position from server: FATAL: could not open file ./pg_hba.conf~: Permission denied

2014-05-16 Thread Heikki Linnakangas

On 05/16/2014 08:11 PM, David G Johnston wrote:
> Can we simply back up the non-data parts of $PGDATA first and then
> move on to the data parts? For the files we'd be dealing with, it
> would be quick enough to just try and fail immediately, rather than
> checking every possible precondition up front. The main issue seems
> to be the case where 2TB of data get backed up and then a small 1k
> file blows away all that work. Let's do those 1k files first.


You'll still need to distinguish the data and non-data parts somehow.
One idea would be to back up any files in the top directory first,
before recursing into the subdirectories. That would've caught the
OP's case, and probably many other typical cases where you drop
something unexpected into $PGDATA. You could still have something
funny nested deep in the data directory, but that's much less common.
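A sketch of what that walk might look like (backup_file() is an
assumed callback here, not an actual backend function):

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/*
 * Back up regular files in a directory before recursing into its
 * subdirectories. Doing this at the top level of $PGDATA is enough to
 * catch things like a root-owned pg_hba.conf~ early; applying it at
 * every level is just as easy.
 */
static void
backup_dir(const char *path, void (*backup_file)(const char *))
{
    int         pass;

    /* Pass 0 sends regular files; pass 1 recurses into directories. */
    for (pass = 0; pass < 2; pass++)
    {
        DIR        *dir = opendir(path);
        struct dirent *de;

        if (dir == NULL)
            return;
        while ((de = readdir(dir)) != NULL)
        {
            char        sub[4096];
            struct stat st;

            if (strcmp(de->d_name, ".") == 0 ||
                strcmp(de->d_name, "..") == 0)
                continue;
            snprintf(sub, sizeof(sub), "%s/%s", path, de->d_name);
            if (lstat(sub, &st) != 0)
                continue;
            if (pass == 0 && S_ISREG(st.st_mode))
                backup_file(sub);
            else if (pass == 1 && S_ISDIR(st.st_mode))
                backup_dir(sub, backup_file);
        }
        closedir(dir);
    }
}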


- Heikki

