Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread Scott Marlowe
On Wed, Nov 2, 2016 at 4:19 PM, John R Pierce  wrote:
> On 11/2/2016 3:01 PM, Steve Crawford wrote:
>>
>> After much cogitation I eventually went RAID-less. Why? The only option
>> for hardware RAID was SAS SSDs and given that they are not built on
>> electro-mechanical spinning-rust technology it seemed like the RAID card was
>> just another point of solid-state failure. I combined that with the fact
>> that the RAID card limited me to the relatively slow SAS data-transfer rates
>> that are blown away by what you get with something like an Intel NVME SSD
>> plugged into the PCI bus. Raiding those could be done in software plus $$$
>> for the NVME SSDs but I already have data-redundancy through a combination
>> of regular backups and streaming replication to identically equipped
>> machines which rarely lag the master by more than a second.
>
>
> just track the write wear life remaining on those NVMe cards, and maintain a
> realistic estimate of lifetime remaining in months, so you can budget for
> replacements.   the complication with PCI NVMe is how to manage a
> replacement when the card is nearing EOL.   The best solution is probably
> failing over to a replication slave database, then replacing the worn out
> card on the original server, and bringing it up from scratch as a new slave;
> this can be done with minimal service interruptions.   Note your slaves will
> be getting nearly as many writes as the masters so likely will need
> replacing in the same time frame.

Yeah, the last thing you want is all your SSDs failing at
once due to write-cycle end of life. Where I used to work we had
pretty hard-working machines doing something like 500 to 1000 writes/s,
and after a year they were at ~90% write life left. YMMV depending on
the SSD etc.

A common trick is to overprovision if possible. Need 100GB of storage
for a fast transactional db? Use 10% of each of a bunch of 800GB drives
to make an array, and you now have a BUNCH of spare write cycles per
device for extra-long life.
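As a back-of-the-envelope sketch of those numbers (assuming wear is roughly linear in bytes written, which real drives only approximate), in Python:

```python
def projected_life_years(years_elapsed: float, life_remaining: float) -> float:
    """Linear wear projection: if (1 - life_remaining) of the rated write
    endurance was consumed in years_elapsed, total life scales accordingly."""
    return years_elapsed / (1.0 - life_remaining)

def overprovision_factor(drive_gb: float, used_gb: float) -> float:
    """Extra endurance from using only part of each drive: wear leveling
    spreads writes over all flash, so write cycles per logical byte scale
    roughly with the drive-to-used ratio."""
    return drive_gb / used_gb

# ~90% write life left after one year -> roughly a ten-year projected life.
print(projected_life_years(1.0, 0.90))
# Using 100 GB of an 800 GB drive -> roughly 8x the write cycles per device.
print(overprovision_factor(800, 100))
```

This is only a first-order estimate; real endurance also depends on write amplification and workload pattern.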


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication (BDR) problem: won't catch up after connection timeout

2016-11-02 Thread Craig Ringer
See also https://github.com/2ndQuadrant/bdr/issues/233




Re: [GENERAL] Replication (BDR) problem: won't catch up after connection timeout

2016-11-02 Thread Craig Ringer
Increase wal_sender_timeout to resolve the issue.

I've been investigating just this issue recently. See
https://www.postgresql.org/message-id/camsr+ye2dsfhvr7iev1gspzihitwx-pmkd9qalegctya+sd...@mail.gmail.com
.

It would be very useful to me to know more about the transaction that
caused this problem.




[GENERAL] libpq backwards compatibility

2016-11-02 Thread Andy Halsall
We have a libpq application written in C++. There are existing running 
deployments of our application that were compiled against PostgreSQL version 
9.3.


We want to move to PostgreSQL version 9.6.


Can we assume that the 9.6 libpq library is backwards compatible with 
applications compiled against 9.3 headers? I wouldn't expect to have to 
re-compile our application against 9.6 libpq headers and redeploy because we're 
not taking advantage of any new features and nothing seems to have been 
deprecated.


The release notes talk about additions to libpq in section E.2.3.9 "Client 
Interfaces". I'd expect any unchanged features to be backwards compatible. 
Limited testing suggests this is so but I can't find a clear statement.


Could somebody please advise?

Thanks

Andy



Re: [GENERAL] Recover from corrupted database due to failing disk

2016-11-02 Thread Jim Nasby

On 11/2/16 6:21 PM, Jim Nasby wrote:

I wouldn't trust the existing cluster that far. Since it sounds like you
have no better options, you could use zero_damaged_pages to allow a
pg_dumpall to complete, but you're going to end up with missing data. So
what I'd suggest would be:

stop Postgres
make a copy of the cluster
start with zero_damaged_pages
pg_dumpall
stop and remove the cluster (make sure you've got that backup)
create a new cluster and load the dump


Oh, and while you're at it, upgrade to a version that's supported. 8.1 
has been out of support for 5+ years.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Recover from corrupted database due to failing disk

2016-11-02 Thread Jim Nasby

On 11/2/16 2:02 PM, Gionatan Danti wrote:

However, backups continue to fail with an "invalid page header in block"
message. Moreover, I am very near the xid wraparound limit and, as vacuum
fails due to the invalid blocks, I expect a database shutdown (triggered
by the 1M transaction protection) within some days.


That means at least some of the Postgres files have been damaged 
(possibly due to the failing disk). Postgres will complain when it sees 
internal data structures that don't make sense, but it has no way to 
know if any of the user data has been screwed up.



From my understanding, both problems *should* be solved by enabling
"zero_damaged_pages" and executing a "vacuumdb -a". Is this expectation
correct? Will a "reindexdb -a" be necessary?


I wouldn't trust the existing cluster that far. Since it sounds like you 
have no better options, you could use zero_damaged_pages to allow a 
pg_dumpall to complete, but you're going to end up with missing data. So 
what I'd suggest would be:


stop Postgres
make a copy of the cluster
start with zero_damaged_pages
pg_dumpall
stop and remove the cluster (make sure you've got that backup)
create a new cluster and load the dump
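On the wraparound pressure mentioned above: the server stops accepting commands roughly 1M transactions before the ~2^31 xid age limit, so the remaining headroom is easy to estimate. A rough sketch in Python, using a hypothetical age (the real number comes from age(datfrozenxid) in pg_database):

```python
# The xid space is 32-bit; wraparound danger begins when the oldest
# unfrozen xid is ~2^31 transactions old. Postgres stops accepting
# commands 1M transactions before that hard limit.
SHUTDOWN_THRESHOLD = 2**31 - 1_000_000

def xids_until_shutdown(datfrozenxid_age: int) -> int:
    """Transactions left before the anti-wraparound shutdown kicks in."""
    return max(0, SHUTDOWN_THRESHOLD - datfrozenxid_age)

# Hypothetical age close to the limit; check the real one with:
#   SELECT datname, age(datfrozenxid) FROM pg_database;
print(xids_until_shutdown(2_146_000_000))
```

If that number is in the hundreds of thousands, the dump-and-reload has to happen within days, as the original poster feared.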
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Questions on Post Setup MASTER and STANDBY replication - Postgres9.1

2016-11-02 Thread Jim Nasby

On 11/2/16 2:49 PM, Joanna Xu wrote:

The replication is verified and works.  My questions are what’s the
reason causing “cp: cannot stat
`/opt/postgres/9.1/archive/00010003': No such file or
directory” on STANDBY and how to fix it?


What instructions/tools did you use to setup replication?


Also, it seems the startup
process is stuck on “recovering 00010004”; how to resolve
it?


As far as I know that's normal while in streaming mode.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] initdb createuser commands

2016-11-02 Thread Jim Nasby

On 10/31/16 9:50 AM, Christofer C. Bell wrote:

He's getting a lot of pushback that really feels it's coming from the
wrong direction.  "Just learn it."  "It's always been this way."  "No
one agrees with you."  These arguments are unconvincing.  That said,
there's nothing wrong with just saying, "we're not going to change it
because we don't want to."


The community often does a horrible job of viewing things through the 
eyes of a new user. This is why mysql became so popular for a while. 
Comments like "just learn it" are unproductive and push new users away.


And we wonder why we're having trouble attracting new developers...

This has actually been discussed recently on -hackers as well[1], and 
there is some general consensus that simplification in this area would 
be a good idea.


1: 
https://www.postgresql.org/message-id/20160826202911.GA320593@alvherre.pgsql

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Checking Postgres Streaming replication delay

2016-11-02 Thread Jim Nasby

On 10/31/16 3:39 PM, Patrick B wrote:

(
  extract(epoch FROM now()) -
  extract(epoch FROM pg_last_xact_replay_timestamp())
)::int lag


You could certainly simplify it though...

extract(epoch FROM now()-pg_last_xact_replay_timestamp())
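The simplification works because subtracting the two epoch values gives the same number as subtracting the timestamps and taking the epoch of the resulting interval. A quick sanity check of that identity in Python, with made-up timestamps standing in for now() and pg_last_xact_replay_timestamp():

```python
from datetime import datetime, timezone

# Hypothetical values standing in for now() and the replay timestamp.
now = datetime(2016, 11, 2, 12, 0, 30, tzinfo=timezone.utc)
replay = datetime(2016, 11, 2, 12, 0, 27, tzinfo=timezone.utc)

# Original form: difference of the two epoch values.
lag_original = now.timestamp() - replay.timestamp()
# Simplified form: seconds in the interval (timestamp difference).
lag_simplified = (now - replay).total_seconds()

assert lag_original == lag_simplified
print(int(lag_simplified))  # prints 3, the replay lag in whole seconds
```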
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread John R Pierce

On 11/2/2016 3:01 PM, Steve Crawford wrote:
After much cogitation I eventually went RAID-less. Why? The only 
option for hardware RAID was SAS SSDs and given that they are not 
built on electro-mechanical spinning-rust technology it seemed like 
the RAID card was just another point of solid-state failure. I 
combined that with the fact that the RAID card limited me to the 
relatively slow SAS data-transfer rates that are blown away by what 
you get with something like an Intel NVME SSD plugged into the PCI 
bus. Raiding those could be done in software plus $$$ for the NVME 
SSDs but I already have data-redundancy through a combination of 
regular backups and streaming replication to identically equipped 
machines which rarely lag the master by more than a second.


just track the write wear life remaining on those NVMe cards, and 
maintain a realistic estimate of lifetime remaining in months, so you 
can budget for replacements.   the complication with PCI NVMe is how to 
manage a replacement when the card is nearing EOL.   The best solution 
is probably failing over to a replication slave database, then replacing 
the worn out card on the original server, and bringing it up from 
scratch as a new slave; this can be done with minimal service 
interruptions.   Note your slaves will be getting nearly as many writes 
as the masters so likely will need replacing in the same time frame.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread Steve Crawford
After much cogitation I eventually went RAID-less. Why? The only option for
hardware RAID was SAS SSDs and given that they are not built on
electro-mechanical spinning-rust technology it seemed like the RAID card
was just another point of solid-state failure. I combined that with the
fact that the RAID card limited me to the relatively slow SAS data-transfer
rates that are blown away by what you get with something like an Intel NVME
SSD plugged into the PCI bus. Raiding those could be done in software plus
$$$ for the NVME SSDs but I already have data-redundancy through a
combination of regular backups and streaming replication to identically
equipped machines which rarely lag the master by more than a second.

Cheers,
Steve






On Wed, Nov 2, 2016 at 1:20 PM, Scott Marlowe 
wrote:

> On Wed, Nov 2, 2016 at 11:40 AM, Joshua D. Drake 
> wrote:
> > On 11/02/2016 10:03 AM, Steve Atkins wrote:
> >>
> >> I'm looking for generic advice on hardware to use for "mid-sized"
> >> postgresql servers, $5k or a bit more.
> >>
> >> There are several good documents from the 9.0 era, but hardware has moved
> >> on since then, particularly with changes in SSD pricing.
> >>
> >> Has anyone seen a more recent discussion of what someone might want for
> >> PostgreSQL in 2017?
> >
> >
> > The rules haven't changed much: more cores (even if a bit slower) are
> > better than fewer, as much RAM as the budget will allow, and:
> >
> > SSD
> >
> > But make sure you get datacenter/enterprise SSDs. Consider that even a slow
> > datacenter/enterprise SSD can do 500MB/s random write and read just as fast
> > if not faster. That means for most installations, a RAID1 is more than
> > enough.
>
> Just to add that many setups utilizing SSDs are as fast or faster
> using kernel level RAID as they are with a hardware RAID controller,
> esp if the RAID controller has caching enabled. We went from 3k to 5k
> tps to 15 to 18k tps by turning off caching on modern LSI MegaRAID
> controllers running RAID5.
>
>


[GENERAL] Google Cloud Compute

2016-11-02 Thread Maeldron T.

Hello,

I’m considering moving my servers to Google. The main reason is the 
transparent encryption they offer. This means I should either move all 
or none. The former would include PostgreSQL, specifically: FreeBSD + ZFS 
+ PostgreSQL.


Do you have any pros or cons based on experience? (Would fsync work as 
it is supposed to work?)


Yesterday I ran a few "benchmarks", in case loading dumps and running 
vacuum qualify, and there was little difference between the persistent 
SSD and the local SSD storage. Both were set up as RAID 1 (ZFS mirror).


Later I managed to read the documentation to learn the persistent SSD is 
already redundant, hence I can’t justify the price and the limitations of 
the local SSD storage for my current needs. However, the persistent 
storage is network-based.


May I use it and sleep well? (If the night is silent otherwise.)

Do the so-called migration events ever happen?

M






Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread Scott Marlowe
On Wed, Nov 2, 2016 at 11:40 AM, Joshua D. Drake  wrote:
> On 11/02/2016 10:03 AM, Steve Atkins wrote:
>>
>> I'm looking for generic advice on hardware to use for "mid-sized"
>> postgresql servers, $5k or a bit more.
>>
>> There are several good documents from the 9.0 era, but hardware has moved
>> on since then, particularly with changes in SSD pricing.
>>
>> Has anyone seen a more recent discussion of what someone might want for
> >> PostgreSQL in 2017?
>
>
> The rules haven't changed much: more cores (even if a bit slower) are better
> than fewer, as much RAM as the budget will allow, and:
>
> SSD
>
> But make sure you get datacenter/enterprise SSDs. Consider that even a slow
> datacenter/enterprise SSD can do 500MB/s random write and read just as fast
> if not faster. That means for most installations, a RAID1 is more than
> enough.

Just to add that many setups utilizing SSDs are as fast or faster
using kernel level RAID as they are with a hardware RAID controller,
esp if the RAID controller has caching enabled. We went from 3k to 5k
tps to 15 to 18k tps by turning off caching on modern LSI MegaRAID
controllers running RAID5.




[GENERAL] Questions on Post Setup MASTER and STANDBY replication - Postgres9.1

2016-11-02 Thread Joanna Xu
Hi All,

After setting up two nodes with MASTER and STANDBY replication, I see " cp: 
cannot stat `/opt/postgres/9.1/archive/00010003': No such file 
or directory" in the log on STANDBY and the startup process recovering 
"00010004" which does not exist in the archive directory.

The replication is verified and works.  My questions are: what's the reason 
causing "cp: cannot stat `/opt/postgres/9.1/archive/00010003': 
No such file or directory" on STANDBY, and how to fix it? Also, it seems the 
startup process is stuck on "recovering 00010004"; how to resolve 
it?

Thank you !

On STANDBY node:

LOG:  entering standby mode
cp: cannot stat `/opt/postgres/9.1/archive/00010003': No such 
file or directory
LOG:  redo starts at 0/320
LOG:  record with zero length at 0/3B0
cp: cannot stat `/opt/postgres/9.1/archive/00010003': No such 
file or directory
LOG:  streaming replication successfully connected to primary
LOG:  consistent recovery state reached at 0/400
LOG:  database system is ready to accept read only connections

ls -rlt /opt/postgres/9.1/archive
-rw--- 1 postgres postgres 16777216 Oct 28 14:07 00010001
-rw--- 1 postgres postgres 16777216 Nov  2 19:00 00010002

ps -ef|grep startup|grep -v grep
postgres  9036  9020  0 19:00 ?00:00:00 postgres: startup process   
recovering 00010004

ps -ef|grep receiver|grep -v grep
postgres  9040  9020  0 19:00 ?00:00:00 postgres: wal receiver process  
 streaming 0/4000380

On MASTER node:

ls -rlt /opt/postgres/9.1/archive
-rw--- 1 postgres postgres 16777216 Oct 28 14:08 00010001
-rw--- 1 postgres postgres 16777216 Nov  2 19:00 00010002
-rw--- 1 postgres postgres 16777216 Nov  2 19:00 00010003
-rw--- 1 postgres postgres  270 Nov  2 19:00 
00010003.0020.backup

ps -ef|grep archiver |grep -v grep
postgres  9041  9035  0 18:57 ?00:00:00 postgres: archiver process   
last was 00010003.0020.backup

ps -ef|grep sender |grep -v grep
postgres  9264  9035  0 19:00 ?00:00:00 postgres: wal sender process 
postgres 192.168.154.106(64182) streaming 0/4000380

Cheers,

Joanna Xu
Senior Oracle DBA
Data Experience Solution BU

+1 613 595 5234

AMDOCS | EMBRACE CHALLENGE EXPERIENCE SUCCESS

POLICY CONTROL IN THE FAST LANE
What's making policy control strategic in 2015 and beyond? Check out the top 
ten factors driving change...


This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp


Re: [GENERAL] Hardware recommendations?

2016-11-02 Thread Joshua D. Drake

On 11/02/2016 10:03 AM, Steve Atkins wrote:

I'm looking for generic advice on hardware to use for "mid-sized" postgresql 
servers, $5k or a bit more.

There are several good documents from the 9.0 era, but hardware has moved on 
since then, particularly with changes in SSD pricing.

Has anyone seen a more recent discussion of what someone might want for 
PostgreSQL in 2017?


The rules haven't changed much: more cores (even if a bit slower) are 
better than fewer, as much RAM as the budget will allow, and:


SSD

But make sure you get datacenter/enterprise SSDs. Consider that even a 
slow datacenter/enterprise SSD can do 500MB/s random write and read just 
as fast if not faster. That means for most installations, a RAID1 is 
more than enough.


JD


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.




[GENERAL] Hardware recommendations?

2016-11-02 Thread Steve Atkins
I'm looking for generic advice on hardware to use for "mid-sized" postgresql 
servers, $5k or a bit more.

There are several good documents from the 9.0 era, but hardware has moved on 
since then, particularly with changes in SSD pricing.

Has anyone seen a more recent discussion of what someone might want for 
PostgreSQL in 2017?

Cheers,
  Steve





[GENERAL] Replication (BDR) problem: won't catch up after connection timeout

2016-11-02 Thread Suomela Tero
Hi there,

We have some problems with BDR and would appreciate any hints and advice with 
it. Here's the short story:

We are testing BDR with PostgreSQL 9.4 and it seems to work quite OK after 
getting it up and running, but we ran into a quite disturbing weakness. A 
basic two-node cluster breaks simply by making a transaction that takes a bit 
long to process. Here's an example:

On node1 the application is processing some data which causes around 100k 
select & insert statements within one transaction. This takes some time and the 
replication says "timeout" (assumably there is a keep-alive mechanism which 
simply doesn't work while the transaction is processed). After the transaction 
is committed on node1, the log on node1 says:

< 2016-11-02 13:06:29.117 EET >LOG:  terminating walsender process due to 
replication timeout
< 2016-11-02 13:06:34.168 EET >LOG:  starting logical decoding for slot 
"bdr_64344_6300833630798326204_1_77037__"
< 2016-11-02 13:06:34.168 EET >DETAIL:  streaming transactions committing after 
0/117FCE38, reading WAL from 0/117FCE38
< 2016-11-02 13:06:34.172 EET >LOG:  logical decoding found consistent point at 
0/117FCE38
< 2016-11-02 13:06:34.172 EET >DETAIL:  There are no running transactions.
< 2016-11-02 13:09:09.196 EET >ERROR:  data stream ended
< 2016-11-02 13:09:09.206 EET >LOG:  worker process: bdr 
(6300843528307178977,1,64344,)->bdr (6300833630798326204,1, (PID 28195) exited 
with exit code 1
< 2016-11-02 13:09:14.209 EET >LOG:  starting background worker process "bdr 
(6300843528307178977,1,64344,)->bdr (6300833630798326204,1,"
< 2016-11-02 13:09:14.217 EET >NOTICE:  version "1.0" of extension "btree_gist" 
is already installed
< 2016-11-02 13:09:14.219 EET >NOTICE:  version "1.0.1.0" of extension "bdr" is 
already installed
< 2016-11-02 13:09:14.241 EET >INFO:  starting up replication from 5 at 
0/D038EC8

Checking the BDR status:

< 2016-11-02 13:09:29.632 EET >LOG:  statement: SELECT node_name, node_status 
FROM bdr.bdr_nodes;
< 2016-11-02 13:09:29.633 EET >LOG:  statement: SELECT conn_sysid, conn_dboid, 
conn_dsn FROM bdr.bdr_connections;
< 2016-11-02 13:09:29.633 EET >LOG:  statement: SELECT slot_name, database, 
active, pg_xlog_location_diff(pg_current_xlog_insert_location(), restart_lsn) 
AS retained_bytes FROM pg_replication_slots WHERE plugin = 'bdr';
< 2016-11-02 13:09:29.635 EET >LOG:  statement: SELECT pid, application_name, 
client_addr, state, pg_xlog_location_diff(pg_current_xlog_insert_location(), 
flush_location) AS lag_bytes FROM pg_stat_replication;

Result:

 node_name | node_status
-----------+-------------
 node1     | r
 node2     | r
(2 rows)

     conn_sysid      | conn_dboid |                            conn_dsn
---------------------+------------+--------------------------------------------------------------------
 6300843528307178977 |      64344 | host=192.168.150.11 port=5432 dbname=test user=test password=test
 6300833630798326204 |      77037 | host=192.168.150.12 port=5432 dbname=test user=test password=test
(2 rows)

                slot_name                | database | active | retained_bytes
-----------------------------------------+----------+--------+----------------
 bdr_64344_6300833630798326204_1_77037__ | test     | t      |       64939984
(1 row)

  pid  |              application_name              |  client_addr   |  state  | lag_bytes
-------+--------------------------------------------+----------------+---------+-----------
 28825 | bdr (6300833630798326204,1,77037,):receive | 192.168.150.12 | catchup |  64939984
(1 row)

The node2 state is 'catchup' with lots to catch up, but nothing happens; it 
stays like this even though the connection looks ok. So the data is not 
replicated anymore.

Then if we restart node2, node1 log starts saying:

< 2016-11-02 13:11:06.656 EET >ERROR:  replication slot 
"bdr_64344_6300833630798326204_1_77037__" is already active for pid 28517

Then if we restart both nodes (requires kill -9 for the wal sender process, 
otherwise it won't stop), node1 log:

< 2016-11-02 13:17:01.318 EET >LOG:  shutting down
< 2016-11-02 13:17:01.343 EET >LOG:  database system is shut down
< 2016-11-02 13:18:03.288 EET >LOG:  server process (PID 28682) was terminated 
by signal 9: Killed
< 2016-11-02 13:18:03.288 EET >LOG:  terminating any other active server 
processes
< 2016-11-02 13:18:03.288 EET >LOG:  abnormal database system shutdown
< 2016-11-02 13:18:25.067 EET >LOG:  database system was shut down at 
2016-11-02 13:17:01 EET
< 2016-11-02 13:18:25.087 EET >LOG:  starting up replication identifier with 
ckpt at 0/155EB520
< 2016-11-02 13:18:25.087 EET >LOG:  recovered replication state of node 1 to 
0/1BC8620
< 2016-11-02 13:18:25.087 EET >LOG:  recovered replication state of node 2 to 
0/1E932C8
< 2016-11-02 13:18:25.087 EET >LOG:  recovered replication state of node 3 to 
0/252FBB8
< 2016-11-02 13:18:25.087 EET >LOG:  recovered replication state of node 4 to 
0/294BA20
< 2016-11-02