Re: [GENERAL] Question about "grant create on database" and pg_dump/pg_dumpall

2016-07-04 Thread Haribabu Kommi
On Fri, Jul 1, 2016 at 5:49 AM, David G. Johnston
 wrote:
>
> I have to agree.  At worst this is a documentation bug but I do think we
> have an actual oversight here - although probably not exactly this or the
> linked bug report.
>
> Testing this out a bit on 9.5 Ubuntu 14.04 - I believe the last command,
> , is in error.
>
> <
> create user testuser;
> create database testdb;
> grant create on database testdb to testuser;
>
>
> $ pg_dump -C -s testdb
> [...]
> CREATE DATABASE testdb WITH TEMPLATE = template0 ENCODING = 'UTF8'
> LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
> --!!!
> --NO GRANT STATEMENTS (If we create the DB we should also be instantiating
> the GRANTs, like we do in pg_dumpall)
> --!!!
> REVOKE ALL ON SCHEMA public FROM PUBLIC;
> REVOKE ALL ON SCHEMA public FROM postgres;
> GRANT ALL ON SCHEMA public TO postgres;
> GRANT ALL ON SCHEMA public TO PUBLIC;
> [...]


I also feel that not generating the GRANT statements is incorrect. On
the other hand, if the user named in the GRANT is not present in the
system where this dump is restored, restoring those statements may
create problems.

Still, I feel the GRANT statements should be present, since the CREATE
DATABASE statement is generated only with the -C option. The attached
patch therefore produces the GRANT statements, subject to the -x
(no-privileges) option.
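For comparison, this is the shape of output pg_dumpall already produces
for database-level privileges, and what a "pg_dump -C" run could
reasonably be expected to include as well (a sketch based on the
testdb/testuser example above, not actual pg_dump output):

```sql
CREATE DATABASE testdb WITH TEMPLATE = template0 ENCODING = 'UTF8'
LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
-- the database-level privilege that the dump currently drops:
GRANT CREATE ON DATABASE testdb TO testuser;
```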


Regards,
Hari Babu
Fujitsu Australia


pg_dump_grant_stmt_fix.patch
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] live and dead tuples are zero on slave running on tmpfs

2015-08-19 Thread Haribabu Kommi
On Thu, Aug 20, 2015 at 12:55 AM, Karthik Viswanathan
 wrote:
> Hello,
>
> I have a master slave (read replica) setup running pg 9.4.4. I'm
> trying to find the dead tuples out both the master and slave
>
> Here's what it looks like on master:
>
> # select relname ,n_live_tup ,n_dead_tup from pg_stat_user_tables;
>  relname  | n_live_tup | n_dead_tup
> --++
>  test_52 |4998366 |  0
>  test_v2 |   25182728 |4086591
>  test_1mrows |1000127 |  0
>
> That seems legit because I did an update to ~4million rows just before this.
>
> Here's what it looks on slave though
>
>  #select relname ,n_live_tup ,n_dead_tup from pg_stat_user_tables;
>  relname  | n_live_tup | n_dead_tup
> --++
>  test_52 |  0 |  0
>  test_v2 |  0 |  0
>  test_1mrows |  0 |  0
>
> the postgres data directory on the slave is configured to a tmpfs
> mounted store. Would this cause it to have zero live & dead tuples ?

The autovacuum process, which updates the live and dead tuple counts,
does not run on slave servers. Also, the pg_stat_tmp files are not
copied from master to slave during the base backup.

Because of the above reasons, no statistics are available for the tables
on the slave server, so it shows 0 live and 0 dead rows. This is
expected.

Regards,
Hari Babu
Fujitsu Australia




Re: [GENERAL] Question about timelines

2015-07-28 Thread Haribabu Kommi
On Wed, Jul 29, 2015 at 3:46 PM, Torsten Förtsch
 wrote:
> Hi,
>
> we have a complex structure of streaming replication (PG 9.3) like:
>
> master --> replica1
>|
>+-> replica2 --> replica21
>|
>+--> replica22 --> replica221
>
> Now I want to retire master and make replica2 the new master:
>
>+--> replica1
>|
>replica2 --> replica21
>|
>+--> replica22 --> replica221
>
> replica2 is currently a synchronous replica.
>
> If I "promote" replica2 a new timeline is created. Hence, I have to
> instruct all other replicas to follow that new timeline
> (recovery_target_timeline = 'latest' in recovery.conf).

PostgreSQL 9.3 supports cascaded standbys automatically following the
new master after a timeline switch. In your case, even though the
timeline changes, you only need to rebuild the standby setup from
scratch for "replica1"; all the others will follow the new master
automatically.
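As a sketch, the recovery.conf on each remaining standby (e.g.
replica21) would carry something like the following so it can follow the
promoted master across the timeline switch (host name is a placeholder):

```conf
# recovery.conf on a cascaded standby (illustrative values)
standby_mode = 'on'
primary_conninfo = 'host=replica2 port=5432'
recovery_target_timeline = 'latest'   # follow timeline switches
```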

Regards,
Hari Babu
Fujitsu Australia




Re: [GENERAL] Unicode license compatibility with PostgreSQL license

2015-03-22 Thread Haribabu Kommi
On Fri, Mar 20, 2015 at 5:54 AM, Peter Geoghegan
 wrote:
> On Wed, Mar 18, 2015 at 11:03 PM, Haribabu Kommi
>  wrote:
>> For our next set of development activities in PostgreSQL, we want to
>> use code from the Unicode organization with PostgreSQL to open source that
>> feature. Is the Unicode license compatible with the PostgreSQL license?
>
> Do you mean that you'd like to add ICU support? I think that would be
> extremely interesting, FWIW. The stability of ICU collations would be
> quite helpful from a number of different perspectives. One of which is
> that having a contract about the stability of strxfrm()-style binary
> keys would allow me to have text abbreviated keys exploited in the
> internal pages of B-Tree indexes, to greatly reduce cache misses with
> index scans on text attributes. This general technique has already been
> very effective with sorting [1], but it feels likely that we'll need
> ICU to make the abbreviation technique useful for indexes.
>
> [1] 
> http://pgeoghegan.blogspot.com/2015/01/abbreviated-keys-exploiting-locality-to.html

Hi All,

Thanks for the information.
We are just evaluating some conversion algorithms for converting between
UTF-32/UTF-16 and UTF-8.

Regards,
Hari Babu
Fujitsu Australia




[GENERAL] Unicode license compatibility with PostgreSQL license

2015-03-18 Thread Haribabu Kommi
Hi All,

For our next set of development activities in PostgreSQL, we want to
use code from the Unicode organization with PostgreSQL to open source
that feature. Is the Unicode license compatible with the PostgreSQL
license?

The following is the header that is present in one of the Unicode files.

/*
 * Copyright 2001-2004 Unicode, Inc.
 *
 * Disclaimer
 *
 * This source code is provided as is by Unicode, Inc. No claims are
 * made as to fitness for any particular purpose. No warranties of any
 * kind are expressed or implied. The recipient agrees to determine
 * applicability of information provided. If this file has been
 * purchased on magnetic or optical media from Unicode, Inc., the
 * sole remedy for any claim will be exchange of defective media
 * within 90 days of receipt.
 *
 * Limitations on Rights to Redistribute This Code
 *
 * Unicode, Inc. hereby grants the right to freely use the information
 * supplied in this file in the creation of products supporting the
 * Unicode Standard, and to make copies of this file in any form
 * for internal or external distribution as long as this notice
 * remains attached.
 */


Regards,
Hari Babu
Fujitsu Australia




Re: [GENERAL] HOT standby on windows not working

2014-04-10 Thread Haribabu Kommi
On Fri, Apr 11, 2014 at 8:15 AM, CS_DBA  wrote:
> Hi All;
>
> We're setting up a HOT standby on Windows 2000 server and PostgreSQL 9.2
>
> We do this:
> I've also tried this approach:
>
>
> 1) Master postgresql.conf file
> Modify the following settings:
> listen_address = '*'
> wal_level = hot_standby
> max_wal_senders = 3
>
>
> 2) Modify Master pg_hba.conf file:
> hostssl replication al 192.168.91.136/32 trust
>
> 3) RESTART MASTER DATABASE

Use the pg_basebackup utility to create the backup directory, and then
change the conf files on the standby.

> 4) Slave postgresql.conf file
> hot_standby = on
>
> 5) Create a recovery.conf file on the slave as follows:
> standby_mode = 'on'
> primary_conninfo = 'host=192.168.91.165'
>
> 6) start the standby database

Try with the above approach.
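An illustrative command sequence for the standby side (host, path and
user are placeholders; this assumes the master settings above are
already in place and the standby's data directory is empty):

```shell
# Take the base backup directly from the master, streaming WAL with it.
pg_basebackup -h 192.168.91.165 -U replication_user \
    -D /path/to/standby/data -X stream -P

# Then set in the standby's postgresql.conf:
#   hot_standby = on
# and create recovery.conf inside the data directory with:
#   standby_mode = 'on'
#   primary_conninfo = 'host=192.168.91.165'

pg_ctl -D /path/to/standby/data start
```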

Regards,
Hari Babu
Fujitsu Australia




Re: [GENERAL] Dead rows not getting removed during vacuum

2014-03-20 Thread Haribabu Kommi
On Thu, Mar 20, 2014 at 11:27 PM, Granthana Biswas  wrote:
> Hello All,
>
> Has anyone ever faced the issue of dead rows not getting removed during
> vacuum even if there are no open transactions/connections?
>
> We have been facing this during every scheduled vacuum which is done after
> closing all other database connections:
>
> 119278 dead row versions cannot be removed yet.

These are dead tuples that appeared after the vacuum operation started.
They may still be visible to other transactions, so this vacuum scan
cannot remove them. They will be cleaned in the next vacuum; you can
observe the same in the next vacuum's output.
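If this keeps happening, it is also worth checking for an old open
transaction or a forgotten prepared transaction whose snapshot holds the
cleanup back. A rough check (column names as in the 9.x catalogs):

```sql
-- Sessions with long-open transactions that can block dead-row cleanup
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;

-- Forgotten two-phase transactions have the same effect
SELECT * FROM pg_prepared_xacts;
```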

Regards,
Hari Babu
Fujitsu Australia




Re: [GENERAL] replication timeout in pg_basebackup

2014-03-10 Thread Haribabu Kommi
On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay  wrote:
> Thanks Hari Babu.
>
> I think what is happening is that my dirty cache builds up quickly for the
> volume where I am backing up. This would trigger flush of these dirty pages
> to the disk. While this flush is going on pg_basebackup tries to do fsync()
> on a received WAL file and gets blocked.

But the fsync is executed every time a WAL file is finished. Is your
database big in size? Is your setup doing write-heavy operations?

On Linux, when the kernel tries to write a bunch of dirty buffers at
once, the fsync call might block for some time. The following link has
some "Tuning Recommendations for write-heavy operations" which might be
useful to you.

http://www.westnet.com/~gsmith/content/linux-pdflush.htm
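The usual knobs from that article are the kernel dirty-page thresholds,
which make the kernel flush earlier and in smaller batches so a later
fsync() has less outstanding data to wait for. For example (illustrative
values only, tune for your hardware):

```conf
# /etc/sysctl.conf -- start background writeback much earlier
vm.dirty_background_ratio = 1
vm.dirty_ratio = 10
```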

Any other ideas for handling these kinds of problems?

Regards,
Hari Babu
Fujitsu Australia




Re: [GENERAL] replication timeout in pg_basebackup

2014-03-09 Thread Haribabu Kommi
On Mon, Mar 10, 2014 at 12:52 PM, Aggarwal, Ajay wrote:

>  Our environment: Postgres version 9.2.2 running on CentOS 6.4
>
> Our backups using pg_basebackup are frequently failing with following error
>
> "pg_basebackup: could not send feedback packet: server closed the connection 
> unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request."
>
> We are invoking pg_basebackup with these arguments : pg_basebackup -D 
> backup_dir -X stream -l backup_dir
>
> In postgres logs we see this log message "terminating walsender process
> due to replication timeout".
>
> Our replication timeout is default 60 seconds. If we increase the
> replication time to say 180 seconds, we see better results but backups
> still fail occasionally.
>
> Running strace on pg_basebackup process, we see that the fsync() call
> takes significant time and could be responsible for causing this timeout in
> postgres.
>

Use the pg_test_fsync utility, available in the PostgreSQL contrib
modules, to test the performance of your system's sync methods.
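For example (run it against the filesystem that receives the backup; the
file argument is just a scratch path, not an existing file):

```shell
# Compare sync-method throughput on the device holding the backup dir
pg_test_fsync -f /backup_dir/pg_test_fsync.scratch
```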


> Has anybody else run into the same issue? Is there a way to run
> pg_basebackup without fsync() ?
>

As of now there is no such option available. I feel it is better to find
out why the sync is taking so long.

Regards,
Hari Babu
Fujitsu Australia


Re: [GENERAL] How to continue streaming replication after this error?

2014-02-23 Thread Haribabu Kommi
On Sat, Feb 22, 2014 at 1:21 PM, Torsten Förtsch
wrote:

> On 21/02/14 09:17, Torsten Förtsch wrote:
> > one of our streaming replicas died with
> >
> > 2014-02-21 05:17:10 UTC PANIC:  heap2_redo: unknown op code 32
> > 2014-02-21 05:17:10 UTC CONTEXT:  xlog redo UNKNOWN
> > 2014-02-21 05:17:11 UTC LOG:  startup process (PID 1060) was terminated
> > by signal 6: Aborted
> > 2014-02-21 05:17:11 UTC LOG:  terminating any other active server
> processes
> > 2014-02-21 05:17:11 UTC WARNING:  terminating connection because of
> > crash of another server process
> > 2014-02-21 05:17:11 UTC DETAIL:  The postmaster has commanded this
> > server process to roll back the current transaction and exit, because
> > another server process exited abnormally and possibly corrupted shared
> > memory.
> > 2014-02-21 05:17:11 UTC HINT:  In a moment you should be able to
> > reconnect to the database and repeat your command.
>
> Any idea what that means?
>
> I have got a second replica dying with the same symptoms.


The xlog record seems to be corrupted. Op code 32 represents
XLOG_HEAP2_FREEZE_PAGE, and the code to handle it exists, so I don't
know why the system fails to recognize the op code. Can you try
pg_xlogdump on the corrupted WAL file?

Keep the data folder for problem investigation. As this seems to be some
kind of corruption, you need to take a fresh base backup to continue.
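If you are on 9.3 or later, pg_xlogdump can decode the segment the
startup process died on; something like this (the path and segment name
are placeholders for your setup):

```shell
pg_xlogdump /var/lib/postgresql/9.3/main/pg_xlog/000000010000000000000001
```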

Regards,
Hari Babu
Fujitsu Australia


Re: [GENERAL] avoiding file system caching of a table

2014-02-17 Thread Haribabu Kommi
On Mon, Feb 17, 2014 at 2:33 PM, Gabriel Sánchez Martínez <
gabrielesanc...@gmail.com> wrote:

> Is there a way of asking PostgreSQL to read the files of a table directly
> off the disk, asking the OS not to use the file cache?  I am running
> PostgreSQL 9.1 on Ubuntu Server 64-bit.  The server in question has the
> maximum amount of RAM it supports, but the database has grown much larger.
>  Most of the time it doesn't matter, because only specific tables or parts
> of indexed tables are queried, and all of that fits in the file cache.  But
> we have a new requirement of queries to a table several times larger than
> the total RAM, and the database has slowed down considerably for the other
> queries.
>
> I am assuming that with every query to the large table, the OS caches the
> files containing the table's data, and since the table is larger than total
> RAM, all the old caches are cleared.  The caches that were useful for other
> smaller tables are lost, and the new caches of the large table are useless
> because on the next query caching will start again from the first files of
> the table.  Please point out if there is a problem with this assumption.
>  Note that I am refering to OS file caching and not PostgreSQL caching.
>
> Is there a way around this?  I have read that there is a way of asking the
> OS not to cache a file when the file is opened.  Is there a way of telling
> PostgreSQL to use this option when reading files that belong a specific
> table?
>
> What about putting the table on a tablespace that is on a different device
> partition with the sync mount option?  Would that help?
>
> All suggestions will be appreciated.
>

Can you please check the following extension? It may be useful to you.
https://github.com/klando/pgfincore
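For example, pgfincore exposes functions along these lines (a sketch;
check the extension's README for the exact API of the version you
install — "big_table" is a placeholder):

```sql
CREATE EXTENSION pgfincore;

-- How much of each relation segment currently sits in the OS page cache
SELECT * FROM pgfincore('big_table');

-- Ask the kernel to drop the cached pages of that one table, so it
-- stops evicting the caches that serve your smaller tables
SELECT * FROM pgfadvise_dontneed('big_table');
```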

Regards,
Hari Babu
Fujitsu Australia


Re: [GENERAL] File system level backup of shut down standby does not work?

2014-02-17 Thread Haribabu Kommi
On Mon, Feb 17, 2014 at 7:02 PM, Jürgen Fuchsberger <
juergen.fuchsber...@uni-graz.at> wrote:

> Hi all,
>
> I have a master-slave configuration running the master with WAL
> archiving enabled and the slave in recovery mode reading back the WAL
> files from the master ("Log-shipping standby" as described in
> http://www.postgresql.org/docs/9.1/static/warm-standby.html)
>
> I take frequent backups of the standby server:
>
> 1) Stop standby server (fast shutdown).
> 2) Rsync to another fileserver
> 3) Start standby server.
>
> I just tried to recover one of these backups which *failed* with the
> following errors:
>
> 2014-02-17 14:27:28 CET LOG:  incomplete startup packet
> 2014-02-17 14:27:28 CET LOG:  database system was shut down in recovery
> at 2013-12-25 18:00:03 CET
> 2014-02-17 14:27:28 CET LOG:  could not open file
> "pg_xlog/000101E30061" (log file 483, segment 97): No such
> file or directory
> 2014-02-17 14:27:28 CET LOG:  invalid primary checkpoint record
> 2014-02-17 14:27:28 CET LOG:  could not open file
> "pg_xlog/000101E30060" (log file 483, segment 96): No such
> file or directory
> 2014-02-17 14:27:28 CET LOG:  invalid secondary checkpoint record
> 2014-02-17 14:27:28 CET PANIC:  could not locate a valid checkpoint record
> 2014-02-17 14:27:29 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:29 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:30 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:30 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:31 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:31 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:32 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:33 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:33 CET FATAL:  the database system is starting up
> 2014-02-17 14:27:33 CET LOG:  startup process (PID 26186) was terminated
> by signal 6: Aborted
> 2014-02-17 14:27:33 CET LOG:  aborting startup due to startup process
> failure
>
>
> So it seems the server is missing some WAL files which are not
> in the backup? Or is it simply not possible to take a backup of a
> standby server in recovery?
>

From version 9.2, you can take backups from a standby as well, using the
pg_basebackup utility.

Is the WAL file present in the archive folder? If yes, did you provide
the restore_command in the recovery.conf file?

I am not sure what happened. During a fast shutdown of the standby it
should create a restart point for further replay of WAL. Can you please
enable the log_checkpoints GUC and check whether a restart point is
created during fast shutdown?

Regards,
Hari Babu
Fujitsu Australia


Re: [GENERAL] Toast and slice of toast

2014-02-16 Thread Haribabu Kommi
On Sun, Feb 16, 2014 at 9:38 PM, Rémi Cura  wrote:

> Hey Dear List,
> could somebody point me to some resources about getting only parts of
> toasted data?
>
> I have a very big custom type and I would like to take blocks of it (like
> byte A to B, then byte C to D, then ...).
>
> I found a function in http://doxygen.postgresql.org/tuptoaster_8c.html
> called toast_fetch_datum_slice; is it the right way to use it
> (a for loop, calling it several times?).
>

"pg_detoast_datum_slice" is the function which will solve your problem.
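For code inside the backend, the convenience macro looks roughly like
this (a sketch only, not compilable outside the PostgreSQL source tree;
A and len stand for your own byte offsets):

```c
/* Inside a C-language function: fetch only bytes [A, A+len) of the
 * argument without detoasting the whole value. */
Datum d = PG_GETARG_DATUM(0);
struct varlena *part = PG_DETOAST_DATUM_SLICE(d, A, len);
/* VARDATA(part) / VARSIZE(part) now describe just that slice; loop
 * over successive (offset, len) windows to walk the whole value. */
```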

Regards,
Hari Babu
Fujitsu Australia


Re: [GENERAL] AutoVacuum Daemon

2013-12-30 Thread Haribabu kommi
On 30 December 2013 19:11 Leonardo M. Ramé wrote:
> Hi, I want know if I should run the auto-vacuum daemon (from
> /etc/init.d/) or it runs automatically and transparently if configured
> in postgres.conf?. If it must be configured manually, what is the
> script to be run, I didn't find pg_autovacuum or similar.
> 
> I didn't find information about this on this page:
> 
> http://www.postgresql.org/docs/8.4/static/routine-
> vacuuming.html#AUTOVACUUM
> 
> P.S.: I'm on linux running PostgreSql 8.4

Just enable the "autovacuum" configuration parameter in the
postgresql.conf file (it is already on by default in 8.4). The server
then internally spawns autovacuum workers that take care of the
vacuuming; there is no separate daemon or script to start from
/etc/init.d.
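The relevant postgresql.conf lines are:

```conf
autovacuum = on          # the default
track_counts = on        # autovacuum relies on the statistics collector
```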


Regards,
Hari babu.




Re: [GENERAL] Question about forced immediate checkpoints during create database

2013-11-11 Thread Haribabu kommi
On 12 November 2013 07:49 Maxim Boguk wrote:
>Hi everyone,
>
>I have few question about checkpoints during create database.
>
>First just extract from log on my test database 9.2.4:
>
>2013-11-12 03:48:31 MSK 1717 @ from  [vxid: txid:0] [] LOG:  checkpoint 
>starting: immediate force wait
>2013-11-12 03:48:31 MSK 1717 @ from  [vxid: txid:0] [] LOG:  checkpoint 
>complete: wrote 168 buffers (0.0%); 0 transaction log file(s) added, 0 
>removed, 0 recycled; write=0.314 s, sync=0.146 s, total=0.462 s; sync 
>files=104, longest=0.040 s,
>average=0.001 s
>2013-11-12 03:48:32 MSK 1717 @ from  [vxid: txid:0] [] LOG:  checkpoint 
>starting: immediate force wait
>2013-11-12 03:48:32 MSK 1717 @ from  [vxid: txid:0] [] LOG:  checkpoint 
>complete: wrote 6 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 
>0 recycled; write=0.311 s, sync=0.002 s, total=0.315 s; sync files=6, 
>longest=0.000 s,
>average=0.000 s
>2013-11-12 03:48:32 MSK 13609 postgres@hh_data from [local] [vxid:502/0 
>txid:0] [CREATE DATABASE] LOG:  duration: 1160.409 ms  statement: create 
>database _tmp;
>
>So during creating of database two immediate force checkpoints was performed.
>
>Now questions:
>
>1)Why these checkpoints performed at all? I understood why checkpoint 
>performed during drop database (to clean shared buffers from the dropped db 
>data), but why issue checkpoint during create database?
>
>2)Why two checkpoints performed one after one?

The two checkpoints are not performed one right after the other: one is
performed before starting the copy, and the other before committing. The
following are the code comments for the two checkpoints.

First one:
/*
 * Force a checkpoint before starting the copy. This will force dirty
 * buffers out to disk, to ensure source database is up-to-date on disk
 * for the copy. FlushDatabaseBuffers() would suffice for that, but we
 * also want to process any pending unlink requests. Otherwise, if a
 * checkpoint happened while we're copying files, a file might be
 * deleted just when we're about to copy it, causing the lstat() call
 * in copydir() to fail with ENOENT.
 */

Second one:
/*
 * We force a checkpoint before committing.  This effectively means
 * that committed XLOG_DBASE_CREATE operations will never need to be
 * replayed (at least not in ordinary crash recovery; we still have to
 * make the XLOG entry for the benefit of PITR operations). This
 * avoids two nasty scenarios:
 *
 * #1: When PITR is off, we don't XLOG the contents of newly created
 * indexes; therefore the drop-and-recreate-whole-directory behavior
 * of DBASE_CREATE replay would lose such indexes.
 *
 * #2: Since we have to recopy the source database during DBASE_CREATE
 * replay, we run the risk of copying changes in it that were
 * committed after the original CREATE DATABASE command but before the
 * system crash that led to the replay.  This is at least unexpected
 * and at worst could lead to inconsistencies, eg duplicate table
 * names.
 *
 * (Both of these were real bugs in releases 8.0 through 8.0.3.)
 *
 * In PITR replay, the first of these isn't an issue, and the second
 * is only a risk if the CREATE DATABASE and subsequent template
 * database change both occur while a base backup is being taken.
 * There doesn't seem to be much we can do about that except document
 * it as a limitation.
 *
 * Perhaps if we ever implement CREATE DATABASE in a less cheesy way,
 * we can avoid this.
 */


>3)Is there any good way to perform spread checkpoint during create database 
>(similar to  --checkpoint=spread for the pg_basebackup) ?
>I'm ready to wait 30 min for create database in that case...
>I asking because performing immediate checkpoint on the large heavy loaded 
>database - good recipe for downtime (IO become overloaded to point of the 
>total stall)...
>Is there any workaround for this problem?
>
>4)Is idea to add an option for create/drop database syntax to control 
>checkpoint behaviour sounds reasonable?

Regards,
Hari babu.





Re: [GENERAL] Question About WAL filename and its time stamp

2013-09-05 Thread Haribabu kommi
On 05 September 2013 18:50 ascot.moss wrote:

>From the pg_xlog folder, I found some files with interesting time stamps: 
>older file names with newer timestamps, can you please advise why?

>Set 1: How come 00040F49008D is 10 minutes newer than 
>00040F49008E?
>-rw--- 1 111 115 16777216 Sep  4 15:28 00040F49008C
>-rw--- 1 111 115 16777216 Sep  4 15:27 00040F49008D <===
>-rw--- 1 111 115 16777216 Sep  4 15:17 00040F49008E <
>-rw--- 1 111 115 16777216 Sep  4 15:26 00040F49008F
>-rw--- 1 111 115 16777216 Sep  4 15:27 00040F490090

>Set 2: why files,  00040F4800FD,  00040F4800FE and 
>00040F49, are not reused?
>1) -rw--- 1 postgres postgres 16777216 Sep  4 23:07 
>00040F4800FA
>2) -rw--- 1 postgres postgres 16777216 Sep  4 23:08 
>00040F4800FB
>3) -rw--- 1 postgres postgres 16777216 Sep  4 23:09 
>00040F4800FC  <===
>4) -rw--- 1 postgres postgres 16777216 Sep  4 14:47 
>00040F4800FD  <
>5) -rw--- 1 postgres postgres 16777216 Sep  4 14:46 
>00040F4800FE
>6) -rw--- 1 postgres postgres 16777216 Sep  4 14:46 
>00040F49

In postgres, at the end of every checkpoint, the old xlog files are
recycled or removed. During the recycle process the next set of xlog
files is created, to be used later by database operations. The files FC,
FD, FE and 00 are recycled files. The FC file is now in use, which is
why its timestamp differs from the others.

Regards,
Hari babu.


Re: [GENERAL] psql: FATAL: the database system is starting up

2013-08-06 Thread Haribabu kommi

On 06 August 2013 16:13 ascot.moss wrote
>Hi,

>I just setup the replication in the slave again, when trying to use psql, I 
>could not get the psql command prompt but got "psql: FATAL:  the database 
>system is starting up" from it.

>PG: 9.2.4

>Below is the log from the the slave: 
>LOG:  database system was shut down in recovery at 2013-08-06 18:34:44
>LOG:  entering standby mode
>LOG:  consistent recovery state reached at 1C/9A0F9CF0
>LOG:  record with zero length at 1C/9A0F9CF0
>LOG:  streaming replication successfully connected to primary
>FATAL:  the database system is starting up

>I am new to PG replication, please help.

Is the configuration parameter "hot_standby" set to on? It allows
queries during recovery; without it, connections are refused with that
message until recovery finishes.
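That means, in the standby's postgresql.conf:

```conf
hot_standby = on    # allow read-only queries while in recovery
```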

Regards,
Hari babu.


