Re: [GENERAL] pg_receivexlog 9.2 client working with 9.1 server?

2013-06-25 Thread Dan Birken
Update: I have successfully used this configuration with a month's worth of
WALs (tens of thousands), run a test restore, and everything appears to
have worked as expected.  So at least based on that test, this
configuration seems fine.

-Dan


On Fri, May 24, 2013 at 4:42 PM, Dan Birken bir...@gmail.com wrote:

 I am running a pg_receivexlog 9.2 client against a 9.1 server.  It seems
 to work.  Comments in the code seem to indicate that this setup is workable.

 However, pg_receivexlog is not part of the 9.1 source tree nor the rpm
 package (as far as I can see), so I am curious if there is some caveat I
 should be aware of, or some other incompatibility that makes this setup not
 work.

 The goal is to have PITR using a combination of pg_basebackup (which is
 part of 9.1) and pg_receivexlog.

 Thanks,
 Dan



[GENERAL] pg_receivexlog 9.2 client working with 9.1 server?

2013-05-24 Thread Dan Birken
I am running a pg_receivexlog 9.2 client against a 9.1 server.  It seems to
work.  Comments in the code seem to indicate that this setup is workable.

However, pg_receivexlog is not part of the 9.1 source tree nor the rpm
package (as far as I can see), so I am curious if there is some caveat I
should be aware of, or some other incompatibility that makes this setup not
work.

The goal is to have PITR using a combination of pg_basebackup (which is
part of 9.1) and pg_receivexlog.

Thanks,
Dan


Re: [GENERAL] pg_dump on Hot standby : clarification on how to

2011-05-13 Thread Dan Birken
What is your max_standby_streaming_delay set to?

If your pg_dump takes longer than your max_standby_streaming_delay (which
is likely, since the default is 30s), you might get that error as well.  This
setting controls how long the standby will delay applying conflicting WAL
records so that a running query or transaction can finish.
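To put rough numbers on that, here is a back-of-the-envelope sketch (plain
Python, not a PostgreSQL API; the unit parsing is simplified and the helper
names are made up for illustration) of checking whether a dump is likely to
outlast the delay:

```python
# Back-of-the-envelope check: will a pg_dump of a given duration outlast
# max_standby_streaming_delay?  (Illustrative only -- not a PostgreSQL API;
# the unit parsing below is a simplification.)

UNITS = {"ms": 0.001, "s": 1, "min": 60, "h": 3600}

def delay_seconds(setting: str) -> float:
    """Parse values like '30s' or '5min'; '-1' means wait forever."""
    if setting == "-1":
        return float("inf")
    for suffix, factor in UNITS.items():
        if setting.endswith(suffix):
            return float(setting[: -len(suffix)]) * factor
    return float(setting)  # bare number: treat as seconds for this sketch

def dump_at_risk(setting: str, dump_duration_seconds: float) -> bool:
    """True if the dump runs longer than the standby is willing to wait."""
    return dump_duration_seconds > delay_seconds(setting)
```

With the default of 30s, any dump longer than half a minute is a
cancellation candidate; a setting of -1 never cancels on this account.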

-Dan

On Fri, May 13, 2011 at 11:28 AM, bubba postgres
bubba.postg...@gmail.com wrote:

 What I mean is if I do pg_dump on the slave I get the ERROR: canceling
 statement due to conflict with recovery message.
 So I googled and tried the solution listed in the linked thread.
 I did a start transaction via psql on the master but I continued to get
 the error.
 Wondered if there was more to it than that.





 On Thu, May 12, 2011 at 5:08 PM, Andrew Sullivan a...@crankycanuck.ca wrote:

 On Thu, May 12, 2011 at 11:26:38AM -0700, bubba postgres wrote:
  I would just like to get some clarification from the list on how to do a
  pg_dump on the slave in the face of canceling statement due to conflict
  with recovery.
  The following links seem to indicate that If I start an idle transaction
 on
  the master I should be able to do the pg_dump, but I tried this in psql
 on
  the master start transaction, and was still unable to do a pg_dump on
 the
  slave at the same time.
  Is there something special about using dblink that would make this all
 work?

 Could you define what you mean by "unable to do pg_dump on the slave"?

 I don't see why dblink would be the special thing.  I think what you
 want is to hold a transaction open on the master so that the WAL can't
 get recycled.  At least, that's what I understood from the post.  I
 haven't actually tried it yet, but to me it sounded like it ought to
 work.

 A

 --
 Andrew Sullivan
 a...@crankycanuck.ca

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





Re: [GENERAL] PostgreSQL 9.0 Streaming Replication Configuration

2011-02-08 Thread Dan Birken
If the standby server cannot pull a WAL file from the master using
streaming replication, it will attempt to pull it from the archive.  If
the WAL segment isn't archived (for example, because you aren't using
archiving), then your streaming replication is unrecoverable and you have to
take a fresh backup from the master and transfer it over to the standby
machine to start replication again.  So the value of having archiving set up
is that if a standby falls far behind, it can recover without your having to
copy the database over to the standby machine again.

Another setting you can tweak is wal_keep_segments on the master machine,
which is the minimum number of WAL segments it will keep without deleting.
So with some simple math (wal_keep_segments * 16MB / your_wal_write_rate)
you can get a ballpark figure for how long your standby machines can fall
behind while still being able to recover without archiving.
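As a sketch of that arithmetic (assuming the default 16MB segment size; the
function name is made up for illustration):

```python
# Ballpark from the formula above: seconds of WAL the master retains at a
# given write rate, i.e. how far a standby can fall behind and still catch
# up via streaming alone, without an archive.  (Illustrative sketch.)

SEGMENT_BYTES = 16 * 1024 * 1024  # default WAL segment size

def standby_slack_seconds(wal_keep_segments: int, wal_bytes_per_sec: float) -> float:
    return wal_keep_segments * SEGMENT_BYTES / wal_bytes_per_sec

# e.g. 128 kept segments at a 1 MB/s WAL write rate:
# 128 * 16MB / 1MB/s = 2048 seconds, roughly 34 minutes of slack
```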

-Dan

On Tue, Feb 8, 2011 at 6:51 PM, Ogden li...@darkstatic.com wrote:


 On Feb 8, 2011, at 8:47 PM, Ray Stell wrote:

 
  pg_controldata command is helpful.
 
  Archiving wal not required, but you can roll it either way.
 
 

 That is my confusion - archiving WAL does not conflict in any way with
 streaming replication? What if streaming replication lags behind (especially
 with a lot of connections)?

 Thank you

 Ogden



Re: [GENERAL] Understanding PG9.0 streaming replication feature

2011-01-26 Thread Dan Birken
(I am not the OP, but recently went through the same thing so I'll chime in)

Reading through the documentation now (albeit with a now pretty good
understanding of how everything works), I think the main confusing thing is
how different bits that apply to file-based log shipping, to streaming
replication, and to both of them are thrown together on this page
(http://developer.postgresql.org/pgdocs/postgres/warm-standby.html),
making it difficult to figure out what you need to know if you are just
looking to implement streaming replication.

For example, in the introduction section:

Directly moving WAL records from one database server to another is typically
described as log shipping. PostgreSQL implements file-based log shipping,
which means that WAL records are transferred one file (WAL segment) at a
time. WAL files (16MB) can be shipped easily and cheaply over any distance,
whether it be to an adjacent system, another system at the same site, or
another system on the far side of the globe. The bandwidth required for this
technique varies according to the transaction rate of the primary server.
Record-based log shipping is also possible with streaming replication (see
Section 25.2.5,
http://developer.postgresql.org/pgdocs/postgres/warm-standby.html#STREAMING-REPLICATION).

It should be noted that the log shipping is asynchronous, i.e., the WAL
records are shipped after transaction commit. As a result, there is a window
for data loss should the primary server suffer a catastrophic failure;
transactions not yet shipped will be lost. The size of the data loss window
in file-based log shipping can be limited by use of the archive_timeout
parameter, which can be set as low as a few seconds. However such a low
setting will substantially increase the bandwidth required for file
shipping. If you need a window of less than a minute or so, consider using
streaming replication (see Section 25.2.5,
http://developer.postgresql.org/pgdocs/postgres/warm-standby.html#STREAMING-REPLICATION).

I colored things that apply to both in purple, that apply just to file-based
log shipping in red, and that just apply to streaming replication in green.
 So if you are reading through this for the first time looking for
information on streaming replication, it is very difficult to figure out
some key points (it works by log-shipping, it is asynchronous), while
avoiding stuff that you don't need to worry about (archive_timeout, WAL
files are transferred one at a time, etc).

I doubt I am the first person that is using postgres replication for the
first time because of hot standbys and streaming replication, and I think
the document is very poor for dealing with those people.  Just looking at
the coloring above, it looks very clearly like the document was written for
file-based log shipping and then details about streaming replication are
just appended at the end.

The great thing about the wiki page
(http://wiki.postgresql.org/wiki/Streaming_Replication), which I am assuming
is the doc the OP is referring to positively, is that it only includes
details about streaming replication, so you don't have to constantly dodge
information that doesn't apply to you.

-Dan


On Wed, Jan 26, 2011 at 7:04 AM, Bruce Momjian br...@momjian.us wrote:

 Ben Carbery wrote:
  Thanks for the responses all, I have this working now. I had to create a
  base backup before copying to the standby for replication to start, but
 the
  main sticking point was actually understanding the terms and concepts
  involved..
 
  I think the Binary Replication Tutorial page on the wiki basically
 explains
  everything. Unfortunately the actual pg manual is still about as clear as
  mud even though I now have a vague idea of how this all works. I think
 this
  is worth mentioning given the majority of the pg manual is actually of an
  unusually high standard - probably among the best technical manuals I
 have
  read in terms of being both comprehensive and concise, so it's a shame
 that
  this section doesn't meet that standard (IMO). Hopefully this will get a
  rewrite at some point!

 Can you give some concrete suggestions on what needs to be added?  The
 current documentation is here:

http://developer.postgresql.org/pgdocs/postgres/index.html

 --
 Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [GENERAL] Question about concurrent synchronous and asynchronous commits

2011-01-13 Thread Dan Birken
Ok given your response, this is my understanding of how the WAL works:

When you begin a transaction, all your changes write to the in-memory WAL
buffer, and that buffer flushes to disk when:
a) Somebody commits a synchronous transaction
b) The WAL buffer runs out of space

Please correct me if I'm wrong.
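That two-trigger model could be sketched as a toy simulation (illustrative
only, not PostgreSQL internals; the class and its sizes are made up):

```python
# Toy model of the two flush triggers listed above: a synchronous commit
# forces a flush, and so does a full WAL buffer.  (Not PostgreSQL internals.)

class WalBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pending = 0   # bytes buffered in memory, not yet on disk
        self.flushes = 0   # how many times we hit disk

    def write(self, nbytes: int) -> None:
        self.pending += nbytes
        if self.pending >= self.capacity:  # trigger (b): buffer out of space
            self.flush()

    def commit(self, synchronous: bool) -> None:
        if synchronous:                    # trigger (a): synchronous commit
            self.flush()                   # also flushes earlier async work

    def flush(self) -> None:
        self.pending = 0
        self.flushes += 1
```

Note how an async commit leaves the buffer dirty, and the next sync commit
flushes everything accumulated so far in one write.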

-Dan

On Wed, Jan 12, 2011 at 12:32 PM, Vick Khera vi...@khera.org wrote:

 On Wed, Jan 12, 2011 at 12:03 AM, Dan Birken bir...@gmail.com wrote:
  If I commit asynchronously and then follow that with
 a synchronous commit,
  does that flush the asynchronous commit as well?

 I'm pretty sure it does, because it has to flush the write-ahead log
 to disk, and there's only one.  You can think of it as getting the
 flush for free from the first transaction, since the single flush
 covered the requirements of both transactions.




[GENERAL] Question about concurrent synchronous and asynchronous commits

2011-01-11 Thread Dan Birken
I notice the documentation page about Asynchronous Commit
(http://www.postgresql.org/docs/8.3/static/wal-async-commit.html) says the
following: The user can select the commit mode of each transaction, so that
it is possible to have both synchronous and asynchronous commit transactions
running concurrently.  Now, out of curiosity, I have a couple of questions
about the details of this.

If I commit asynchronously and then follow that with a synchronous commit,
does that flush the asynchronous commit as well?

Is the WAL strictly linear, in that commits must always replay in the order
in which the transactions completed on the server, regardless of whether
they are asynchronous or synchronous?

Thanks,
Dan