Receiving an error while altering the table column datatype

2019-09-25 Thread Raghavendra Rao J S V
Hi All,

I am receiving the error below while altering a table column's datatype.



*Reason:SQL Error [0A000]: ERROR: cannot alter type of a column used in a
trigger definition  Detail: trigger extensiontrigger on table tele depends
on column "phonenumber"*

Do I need to drop the trigger and then alter the table column to modify its
datatype? Am I correct, or is there another way to resolve this?
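
A minimal sketch of that drop-and-recreate approach (the trigger, table and
column names are taken from the error message above; the new column type is
only an example):

-- capture the current trigger definition so it can be recreated afterwards
SELECT pg_get_triggerdef(oid) FROM pg_trigger WHERE tgname = 'extensiontrigger';

BEGIN;
DROP TRIGGER extensiontrigger ON tele;
ALTER TABLE tele ALTER COLUMN phonenumber TYPE varchar(20);
-- recreate the trigger here using the definition captured above
COMMIT;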

-- 
Regards,
Raghavendra Rao J S V


Error while using pg_dump

2019-04-16 Thread Raghavendra Rao J S V
Hi All,

We are facing the errors below while taking a backup of the database using
pg_dump. Could you suggest how to rectify this?


pg_dump: [archiver (db)] query was: COPY public.aclappliedtopep (id,
instance_version, direction, aclname, ifname, owningentityid,
protocolendpoint_id, deploypending, authentityid, authentityclass,
accesscontrollist_id) TO stdout;

dollar qustion is 1

dbuser name is qovr , dbname is qovr and direcotry is
/opt/tempbackups/HCS-SYD3-PCA01-weekly-backup/backup/appcomponent/qovr-db/emms_db_09_05_17_18_46_26.dump


dollar qustion is 0

pg_dump: FATAL:  terminating connection due to administrator command

pg_dump: [archiver (db)] query failed: server closed the connection
unexpectedly

This probably means the server terminated abnormally

before or while processing the request.

pg_dump: [archiver (db)] query was: COPY public.aclappliedtopep (id,
instance_version, direction, aclname, ifname, owningentityid,
protocolendpoint_id, deploypending, authentityid, authentityclass,
accesscontrollist_id) TO stdout;

pg_dump: [archiver (db)] connection to database "qovr" failed: could not
connect to server: Connection refused

Is the server running on host "localhost" (127.0.0.1) and
accepting

TCP/IP connections on port 5433?

pg_dump: error reading large object 69417845: FATAL:  terminating
connection due to administrator command

FATAL:  terminating connection due to administrator command

pg_dump: could not open large object 59087743: FATAL:  terminating
connection due to administrator command

FATAL:  terminating connection due to administrator command

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

Could you explain which other admin-type operations are not supported by
PgBouncer?

Regards,
Raghavendra Rao.

On Mon, 8 Apr 2019 at 19:16, Scot Kreienkamp 
wrote:

> Replication and several other admin type operations must connect directly
> to PG.  They are not supported through PGBouncer.
>
>
>
> *From:* Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
> *Sent:* Monday, April 8, 2019 9:21 AM
> *To:* pgsql-general@lists.postgresql.org
> *Subject:* Getting error while running the pg_basebackup through PGBOUNCER
>
>
>
> Hi All,
>
>
>
> We are using PgBouncer (a connection pooling mechanism). PgBouncer listens
> on port 5433.
>
>
>
> The Postgres database port number is 6433. Through port 5433, PgBouncer
> connects to the Postgres database on port 6433.
>
>
>
> PgBouncer is establishing the connections properly, but when I try to run
> pg_basebackup through port 5433 (the PgBouncer port) we receive the error
> below. Please guide me.
>
>
>
>
>
>  /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x
> --format=tar --gzip --compress=1 --pgdata=- -D /opt/rao
>
>
>
> *pg_basebackup: could not connect to server: ERROR:  Unsupported startup
> parameter: replication*
>
>
>
>
>
> --
>
> Regards,
> Raghavendra Rao J S V
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Raghavendra Rao J S V
Hi All,

We are using PgBouncer (a connection pooling mechanism). PgBouncer listens on
port 5433.

The Postgres database port number is 6433. Through port 5433, PgBouncer
connects to the Postgres database on port 6433.

PgBouncer is establishing the connections properly, but when I try to run
pg_basebackup through port 5433 (the PgBouncer port) we receive the error
below. Please guide me.


 /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x
--format=tar --gzip --compress=1 --pgdata=- -D /opt/rao

*pg_basebackup: could not connect to server: ERROR:  Unsupported startup
parameter: replication*
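
Since replication connections cannot go through PgBouncer (as noted in the
reply above), a minimal workaround sketch is to point pg_basebackup directly
at the Postgres port (6433 in this setup) instead of the pooler port:

 /opt/postgres/9.2/bin/pg_basebackup -p 6433 -U postgres -P -v -x \
    --format=tar --gzip --compress=1 -D /opt/rao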


-- 
Regards,
Raghavendra Rao J S V


How were duplicate values inserted into the primary key column of a table, and how to fix it

2019-01-24 Thread Raghavendra Rao J S V
Hi All,


We are using Postgresql 9.2 database.



In one of our transactional tables, I have observed duplicate values in the
primary key column.



Please guide me on how this is possible and how to fix this kind of issue.
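
Duplicate values in a primary key column usually mean the unique index backing
the constraint has been damaged. A rough detection sketch (the table and
column names are placeholders):

SELECT pk_column, count(*)
FROM my_table
GROUP BY pk_column
HAVING count(*) > 1;

-- after removing or merging the duplicate rows, rebuild the table's indexes:
-- REINDEX TABLE my_table;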

-- 
Regards,
Raghavendra Rao J S V


Need a command to take the backup of the child tables along with its master table.

2019-01-24 Thread Raghavendra Rao J S V
Hi All,

We have one master table and multiple child tables (dependent tables)
associated with it.



While taking a backup of the master table, I would like to also back up all of
its child (dependent) tables.



Please guide me on how to back up the master table and its dependent tables
using the *pg_dump* command.
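
pg_dump does not follow foreign-key relationships on its own, so each child
table has to be listed explicitly with a repeated -t/--table switch. A minimal
sketch (the table and database names are placeholders):

pg_dump -U postgres \
    -t public.master_table \
    -t public.child_table1 \
    -t public.child_table2 \
    -f master_and_children.sql  mydb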

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Do idle sessions consume more CPU and RAM? If yes, how to control them

2018-12-17 Thread Raghavendra Rao J S V
In my application, idle sessions are consuming CPU and RAM; refer to the ps
command output below.

How do idle sessions consume more RAM/CPU?

How can we control them?

We are using PostgreSQL 9.2 on CentOS 6. Please guide me.

[image: image.png]
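
A small sketch for spotting and, if necessary, terminating long-idle sessions
on 9.2 (pg_stat_activity and pg_terminate_backend() are standard; the one-hour
cutoff is only an example):

-- list sessions that have been idle for more than an hour
SELECT pid, usename, state, now() - state_change AS idle_for, query
FROM pg_stat_activity
WHERE state = 'idle'
  AND now() - state_change > interval '1 hour';

-- terminate a specific idle session (requires superuser)
-- SELECT pg_terminate_backend(<pid>);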

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


pg_dump backup utility is taking around 24 hrs to take the backup of 28 GB

2018-10-18 Thread Raghavendra Rao J S V
Hi All,

We are using the *pg_dump* backup utility to take the backup of the database.
Unfortunately, it is taking around 24 hrs to back up a 28 GB database. Please
guide me on how to reduce this time, and let me know whether any parameter
needs to be modified to help reduce the backup time. We are using Postgres
version 9.2.

*Note:-* Kindly suggest options using pg_dump only.
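
One option worth noting (a sketch, not tested against this setup): pg_dump's
parallel mode (-j) together with the directory format. The -j switch only
exists in pg_dump 9.3 and later, but a newer pg_dump client can normally dump
an older 9.2 server, so running it from a newer client is one way to stay
"pg_dump only":

pg_dump -h dbhost -U postgres -F d -j 4 -f /opt/backups/mydb_dir  mydb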

-- 
Regards,
Raghavendra Rao


Re: Is there any impact if "#wal_keep_segments = 0" and "checkpoint_segments = 128" are set in the postgresql.conf file?

2018-10-15 Thread Raghavendra Rao J S V
Thanks a lot.

On Mon, 15 Oct 2018 at 14:43, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> wrote:

> On Mon, 15 Oct 2018 09:46:47 +0200
> Laurenz Albe  wrote:
>
> > Raghavendra Rao J S V wrote:
> > > Is there any impact if "#wal_keep_segments = 0" and "checkpoint_segments
> > > = 128" are set in the postgresql.conf file? If yes, what is the impact?
> >
> > Yes.
> > - You will have fewer checkpoints requested by data modification
> activity.
> > - Crash recovery might take longer.
>
> And considering wal_keep_segments, there is no impact on perf/recovery. This
> is mostly related to the standby replication lag allowed, and some other
> solutions exist (slots, archiving).
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Is there any impact if "#wal_keep_segments = 0" and "checkpoint_segments = 128" are set in the postgresql.conf file?

2018-10-15 Thread Raghavendra Rao J S V
Hi All,

Is there any impact if "#wal_keep_segments = 0" and "checkpoint_segments
= 128" are set in the postgresql.conf file? If yes, what is the impact?

*checkpoint_segments = 128* # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
*#wal_keep_segments = 0* # in logfile segments, 16MB each; 0 disables
wal_level = archive # minimal, archive, or hot_standby
*archive_mode = off* # allows archiving to be done

-- 
Regards,
Raghavendra Rao J S V


Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-10-15 Thread Raghavendra Rao J S V
Hi John,

As you said, I have gone through the documentation. Which formula is correct
for 9.2 PostgreSQL: (2 + checkpoint_completion_target) * checkpoint_segments + 1
files, *or* checkpoint_segments + wal_keep_segments
<https://www.postgresql.org/docs/9.2/static/runtime-config-replication.html#GUC-WAL-KEEP-SEGMENTS>
+ 1 files?

In my environment we have set *wal_keep_segments* and *checkpoint_segments*
as below. Will this cause any negative impact?

*checkpoint_segments = 128* # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
*#wal_keep_segments = 0* # in logfile segments, 16MB each; 0 disables

++

There will always be at least one WAL segment file, and will normally not
be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1
or checkpoint_segments + wal_keep_segments
<https://www.postgresql.org/docs/9.2/static/runtime-config-replication.html#GUC-WAL-KEEP-SEGMENTS>
+
1 files. Each segment file is normally 16 MB (though this size can be
altered when building the server). You can use this to estimate space
requirements for WAL. Ordinarily, when old log segment files are no longer
needed, they are recycled (renamed to become the next segments in the
numbered sequence). If, due to a short-term peak of log output rate, there
are more than 3 * checkpoint_segments + 1 segment files, the unneeded
segment files will be deleted instead of recycled until the system gets
back under this limit.

++
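
As a worked example with the settings quoted above (checkpoint_segments = 128,
checkpoint_completion_target = 0.5, wal_keep_segments = 0), the normal upper
bounds would be roughly:

(2 + 0.5) * 128 + 1 = 321 segments  ->  321 * 16 MB ≈ 5 GB
128 + 0 + 1         = 129 segments  ->  129 * 16 MB ≈ 2 GB

The larger of the two applies, so pg_xlog should normally stay around 5 GB,
with a short-term ceiling near 3 * 128 + 1 = 385 segments (≈ 6 GB) before
unneeded segments are deleted rather than recycled.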



Regards,
Raghavendra Rao


On Wed, 5 Sep 2018 at 23:23, Johnes Castro  wrote:

> 1 wal by default occupies 16MB.
> The parameter in version 9.2 that controls this is: wal_keep_segments
>
>
> By setting the parameter to 10, the maximum size of the pg_xlog will be
> 160MB.
>
> Best Regards,
> Johnes Castro
>
>
> --
> *De:* Johnes Castro 
> *Enviado:* quarta-feira, 5 de setembro de 2018 15:48
> *Para:* Raghavendra Rao J S V; pgsql-general@lists.postgresql.org
> *Assunto:* RE: Max number of WAL files in pg_xlog directory for Postgres
> 9.2 version
>
> Hi,
>
> This page in the documentation can help you.
> https://www.postgresql.org/docs/9.2/static/wal-configuration.html
>
> Best Regards,
> Johnes Castro
> PostgreSQL: Documentation: 9.2: WAL Configuration
> <https://www.postgresql.org/docs/9.2/static/wal-configuration.html>
> 29.4. WAL Configuration. There are several WAL-related configuration
> parameters that affect database performance.This section explains their
> use. Consult Chapter 18 for general information about setting server
> configuration parameters.. Checkpoints are points in the sequence of
> transactions at which it is guaranteed that the heap and index data files
> have been updated with all information ...
> www.postgresql.org
>
> --
> *De:* Raghavendra Rao J S V 
> *Enviado:* quarta-feira, 5 de setembro de 2018 15:39
> *Para:* pgsql-general@lists.postgresql.org
> *Assunto:* Max number of WAL files in pg_xlog directory for Postgres 9.2
> version
>
> Hi All,
>
> We are using a Postgres 9.2 version database.
>
> Please let me know the maximum number of WAL files allowed in the pg_xlog directory.
>
> What is the formula? I am seeing different formulas. Could you tell me which
> one determines the maximum number of WAL files in the pg_xlog directory for a
> Postgres 9.2 database, please?
>
>
> --
> Regards,
> Raghavendra Rao J S V
>
>

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: Please let me know which configuration setting we need to modify to speed up the pg_dump backup.

2018-10-10 Thread Raghavendra Rao J S V
Thank you very much for your prompt response Christopher.

On Thu 11 Oct, 2018, 8:41 AM Christopher Browne,  wrote:

> On Wed, Oct 10, 2018, 10:32 PM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Hi All,
>>
>> pg_dump is taking more time. Please let me know which configuration
>> setting we need to modify to speedup the pg_dump backup.We are using 9.2
>> version on Centos Box.
>>
>
> There certainly isn't a configuration parameter to say "make PG dump
> faster."
>
> - It is possible that it is taking longer to backup the database because
> the database has grown in size.  If you have a lot more data captured, this
> would be a natural consequence, that you need simply to accept.
>
> But there are additional possibilities...
>
> - Perhaps the database is bloated because an update pattern is leading to
> a lot of dead data being left behind.  In that case some tables need to be
> vacuumed much more often and you should look into the auto vacuum
> configuration.
>
> - Perhaps the database has some table that contains a lot of obsolete
> data.  This would depend heavily on the nature of your application.
>
> You should look to see what data you are collecting that is not of ongoing
> value.  That may represent data that you should trim out of the database.
> That should improve the amount of time it takes to do a backup of the
> database.
>
>>
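
A quick sketch for checking the dead-tuple / autovacuum angle raised above,
using the standard statistics view:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;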


Re: Please let me know which configuration setting we need to modify to speed up the pg_dump backup.

2018-10-10 Thread Raghavendra Rao J S V
We are using PostgreSQL 9.2, which does not have that option.

Please suggest any other approaches to improve the performance of pg_dump.

On Thu 11 Oct, 2018, 8:05 AM Pavan Teja, 
wrote:

> Hi,
>
> You can use -j jobs option to speed up the process.
>
> Hope it works.
>
> Regards,
> Pavan
>
> On Thu, Oct 11, 2018, 8:02 AM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Hi All,
>>
>> pg_dump is taking more time. Please let me know which configuration
>> setting we need to modify to speedup the pg_dump backup.We are using 9.2
>> version on Centos Box.
>>
>> --
>> Regards,
>> Raghavendra Rao J S V
>>
>>


Please let me know which configuration setting we need to modify to speed up the pg_dump backup.

2018-10-10 Thread Raghavendra Rao J S V
Hi All,

pg_dump is taking a long time. Please let me know which configuration settings
we need to modify to speed up the pg_dump backup. We are using version 9.2 on
a CentOS box.

-- 
Regards,
Raghavendra Rao J S V


pg_dump: [archiver (db)] query failed: FATAL: semop(id=10649641) failed: Identifier removed

2018-10-08 Thread Raghavendra Rao J S V
I am receiving the errors below while taking a backup using pg_dump. Please
help me understand why this happens and how to resolve it.


pg_dump: [archiver (db)] query failed: ERROR:  could not open relation with
OID 14132724
pg_dump: [archiver (db)] query was: SELECT
pg_catalog.pg_get_viewdef('14132724'::pg_catalog.oid) AS viewdef
pg_dump: [archiver (db)] query failed: FATAL:  semop(id=10649641) failed:
Identifier removed
FATAL:  semop(id=10649641) failed: Identifier removed
FATAL:  semop(id=10649641) failed: Identifier removed
FATAL:  semop(id=10649641) failed: Identifier removed
pg_dump: [archiver (db)] query was: DECLARE bloboid CURSOR FOR SELECT oid
FROM pg_largeobject_metadata
pg_dump: [archiver (db)] query failed: FATAL:  semop(id=12943400) failed:
Identifier removed
FATAL:  semop(id=12943400) failed: Identifier removed
FATAL:  semop(id=12943400) failed: Identifier removed
FATAL:  semop(id=12943400) failed: Identifier removed
pg_dump: [archiver (db)] query was: DECLARE bloboid CURSOR FOR SELECT oid
FROM pg_largeobject_metadata
pg_dump: [archiver (db)] query failed: FATAL:  semop(id=15466542) failed:
Identifier removed
FATAL:  semop(id=15466542) failed: Identifier removed
FATAL:  semop(id=15466542) failed: Identifier removed
FATAL:  semop(id=15466542) failed: Identifier removed
pg_dump: [archiver (db)] query was: DECLARE bloboid CURSOR FOR SELECT oid
FROM pg_largeobject_metadata
pg_dump: [archiver (db)] query failed: FATAL:  semop(id=17432611) failed:
Identifier removed
FATAL:  semop(id=17432611) failed: Identifier removed
FATAL:  semop(id=17432611) failed: Identifier removed
FATAL:  semop(id=17432611) failed: Identifier removed
pg_dump: [archiver (db)] query was: DECLARE bloboid CURSOR FOR SELECT oid
FROM pg_largeobject_metadata
pg_dump: [archiver (db)] query failed: FATAL:  semop(id=19759139) failed:
Identifier removed
FATAL:  semop(id=19759139) failed: Identifier removed
FATAL:  semop(id=19759139) failed: Identifier removed
FATAL:  semop(id=19759139) failed: Identifier removed
pg_dump: [archiver (db)] query was: DECLARE bloboid CURSOR FOR SELECT oid
FROM pg_largeobject_metadata

-- 
Regards,
Raghavendra Rao J S V


pg_controldata: could not read file "/opt/postgres/9.2/data//global/pg_control": Success

2018-10-07 Thread Raghavendra Rao J S V
*Hi All,*

*archive_mode* was unfortunately turned *on* in my Postgres 9.2 database.

Because of that, the disk is 100% full. We are facing the problem below when
we try to start the database.

*PANIC: could not read from control file:Success*

Please help me with how to handle this situation.



*The log files contain the information below:-*

2018-10-08 05:27:44.262 UTC,,,27688,,5bbaead0.6c28,1,,2018-10-08 05:27:44
UTC,,0,LOG,0,"database system was interrupted while in recovery at
2018-10-03 15:01:03 UTC",,"This probably means that some data is corrupted
and you will have to use the last backup for recovery.",,,""
2018-10-08 05:27:44.262 UTC,,,27688,,5bbaead0.6c28,2,,2018-10-08 05:27:44
UTC,,0,LOG,0,"database system was not properly shut down; automatic
recovery in progress",""
2018-10-08 05:27:44.265 UTC,,,27688,,5bbaead0.6c28,3,,2018-10-08 05:27:44
UTC,,0,LOG,0,"redo starts at 93/775816B0",""
2018-10-08 05:27:44.514 UTC,,,27688,,5bbaead0.6c28,4,,2018-10-08 05:27:44
UTC,,0,FATAL,53100,"could not extend file ""base/77017/160045"": wrote only
4096 of 8192 bytes at block 278",,"Check free disk space.",,,"xlog redo
insert(init): rel 1663/77017/160045; tid 278/1"""
2018-10-08 05:27:44.517 UTC,,,27686,,5bbaead0.6c26,1,,2018-10-08 05:27:44
UTC,,0,LOG,0,"startup process (PID 27688) exited with exit code
1",""
2018-10-08 05:27:44.517 UTC,,,27686,,5bbaead0.6c26,2,,2018-10-08 05:27:44
UTC,,0,LOG,0,"aborting startup due to startup process
failure",""

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: We are facing "PANIC: could not read from control file:Success" error while starting the database.

2018-10-04 Thread Raghavendra Rao J S V
On Fri, 5 Oct 2018 at 07:06, Thomas Munro 
wrote:

> On Fri, Oct 5, 2018 at 4:29 AM Raghavendra Rao J S V
>  wrote:
> > PANIC: could not read from control file:Success
>
> That means that the pg_control file is the wrong size.  What size is
> it?  What filesystem is this, that allowed an out-of-space condition
> to result in a file being truncated?  Normally we only overwrite that
> file, so after creation it should stay the same size.
>

The size of the pg_control file is 42 KB. We are using "CentOS Linux release
7.3.1611 (Core)".

The permissions and size of the file are as expected.

Kindly guide me on how to handle this kind of error.

The log file shows errors as below.


> --
> Thomas Munro
> http://www.enterprisedb.com
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


We are facing "PANIC: could not read from control file:Success" error while starting the database.

2018-10-04 Thread Raghavendra Rao J S V
Hi All,

*archive_mode* was unfortunately turned *on* in my Postgres 9.2 database.

Because of that, the disk became 100% full. We have removed a few old xlog
files and space is now available, but we are still facing the problem below
when we try to start the database.

*PANIC: could not read from control file:Success*

Please help me to resolve the above error.

-- 
Regards,
Raghavendra Rao J S V


Re: How to maintain the csv log files in pg_log directory only for past 30 days

2018-09-28 Thread Raghavendra Rao J S V
Hi All,

I hope you are all recommending the settings below to keep at most 30 days of
logs in the *pg_log* directory. Please correct me if I am wrong.

log_filename = 'postgresql-%d.log'
log_truncate_on_rotation = on
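
For reference, a minimal postgresql.conf sketch of this approach
(log_destination and log_rotation_age are added here as assumptions, since the
logs are CSV and rotate daily):

logging_collector = on
log_destination = 'csvlog'
log_directory = 'pg_log'
log_filename = 'postgresql-%d.log'   # with csvlog the file becomes postgresql-<day>.csv
log_rotation_age = 1d
log_truncate_on_rotation = on        # day-of-month pattern, so files are reused after a month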

Regards,
Raghavendra Rao

On Sat, 29 Sep 2018 at 04:24, Michael Paquier  wrote:

> On Fri, Sep 28, 2018 at 06:19:16AM -0700, Adrian Klaver wrote:
> > If log_truncate_on_rotation = 'on', correct?
>
> Yup, thanks for precising.
> --
> Michael
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: How to maintain the csv log files in pg_log directory only for past 30 days

2018-09-27 Thread Raghavendra Rao J S V
Thanks for the prompt response.

On Fri 28 Sep, 2018, 10:55 AM Michael Paquier,  wrote:

> On Fri, Sep 28, 2018 at 10:33:30AM +0530, Raghavendra Rao J S V wrote:
> > Log file will be generated in *csv* format at *pg_log* directory in our
> > PostgreSQL. Every day we are getting one log file. We would like to
> > maintain only max 30 days. Which setting need to modify by us in
> > “postgresql.conf” in order to recycle the log files after 30 days.
>
> If you use for example log_filename = 'postgresql-%d.log', then the
> server uses one new file every day.  This truncates the contents from
> the last month automatically.
> --
> Michael
>


How to maintain the csv log files in pg_log directory only for past 30 days

2018-09-27 Thread Raghavendra Rao J S V
Hi All,

Log files are generated in *csv* format in the *pg_log* directory of our
PostgreSQL installation. Every day we get one log file. We would like to keep
at most 30 days of logs. Which setting do we need to modify in
“postgresql.conf” in order to recycle the log files after 30 days?
-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Which is the most stable PostgreSQL version yet present for CentOS 7?

2018-09-19 Thread Raghavendra Rao J S V
Hi All,

Which is the most stable PostgreSQL version yet present for CentOS 7?

-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Raghavendra Rao J S V
Hi All,

We are using a Postgres 9.2 version database.

Please let me know the maximum number of WAL files allowed in the pg_xlog
directory.

What is the formula? I am seeing different formulas. Could you tell me which
one determines the maximum number of WAL files in the pg_xlog directory for a
Postgres 9.2 database, please?


-- 
Regards,
Raghavendra Rao J S V


How to search for a particular line/text of code in all Postgres database objects

2018-08-27 Thread Raghavendra Rao J S V
Hi All,

How can I search for a particular line/text of code in all Postgres database
objects, such as functions, triggers, views, etc.?

Is there any data dictionary table in Postgres?

E.g.:- My requirement is: I want to find anything related to the employee
table in any function, trigger, view, etc.

Kindly help me.
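
A rough sketch using the system catalogs, searching for the string 'employee'
from the example above (adjust the pattern as needed):

-- function bodies
SELECT n.nspname, p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prosrc ILIKE '%employee%';

-- view definitions
SELECT schemaname, viewname
FROM pg_views
WHERE definition ILIKE '%employee%';

-- trigger definitions
SELECT tgname, pg_get_triggerdef(oid)
FROM pg_trigger
WHERE NOT tgisinternal
  AND pg_get_triggerdef(oid) ILIKE '%employee%';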
-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Receiving an error while taking the backup using the "pg_basebackup" utility.

2018-08-27 Thread Raghavendra Rao J S V
Hi All,

We are using the command below to take the backup of the database.

*$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=1 --pgdata=- -D /opt/rao *

While taking the backup we received the error below.

transaction log start point: 285/8F80

pg_basebackup: could not get transaction log end position from server:
FATAL:  requested WAL segment 00010285008F has already been
removed

Please guide me on why this happens and how to handle this error. If I need to
change any of the options in my pg_basebackup command, let me know.

Please also clarify what *--pgdata=- -D* means in my *pg_basebackup* command
above.
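
As a side note, --pgdata is simply the long form of -D, so "--pgdata=- -D
/opt/rao" effectively just sets the output directory to /opt/rao. For the
missing-WAL error, a sketch of the usual mitigation on 9.2 is to stream WAL
during the backup instead of fetching it at the end (if I recall correctly, on
9.2 WAL streaming works only with the plain format), or alternatively to raise
wal_keep_segments on the server:

$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v \
    --xlog-method=stream --format=plain -D /opt/rao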
-- 
Regards,
Raghavendra Rao J S V


Re: Size of the table is growing abnormally in my database.

2018-08-25 Thread Raghavendra Rao J S V
Ok, thanks.

On Sun 26 Aug, 2018, 10:46 AM Paul Carlucci, 
wrote:

> There's a handful of hidden columns like Xmin and Xmax per row that you're
> not accounting for, header info per page, reserve space, free space... The
> physical size on disk is reasonable.
>
> Otherwise you can reduce the number of rows by cleaning up and moving out
> old data, reduce the width of each row by getting rid of any unused columns
> or switching to narrower data types, or drop unused indexes.  If none of
> that works for you then you're going to have to adjust your disk budget.
>
> On Sun, Aug 26, 2018, 12:37 AM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Thank you very much for your prompt response.
>>
>> Please guide me below things.
>>
>> How to check rows got corrupted?
>>
>> How to check table got corrupted?
>>
>> How to check which row is occupied more space in the table?
>>
>> Is this expected?
>>
>> [image: image.png]
>>
>> On Sun, 26 Aug 2018 at 09:46, Adrian Klaver 
>> wrote:
>>
>>> On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:
>>> > Hi All,
>>> >
>>> > One of our database size is 50gb. Out of it one of the table has
>>> > 149444622 records. Size of that table is 14GB and its indexes size is
>>> 16GB.
>>> > Total size of the table and its indexes are 30GB. I have perfomred the
>>> > below steps on that table.
>>> >
>>> > reindex table table_name;
>>> >
>>> > vacuum full verbose analyze on table_name;
>>> >
>>> > But still the size of the table and its indexes size are not reduced.
>>> > Please guid me. How to proceed further.
>>>
>>> Rough approximation:
>>>
>>> 14,000,000,000 / 150,000,000 = 93 bytes/record.
>>>
>>> I am not seeing an issue. If you want to reduce the size of the table
>>> remove rows.
>>>
>>>
>>> >
>>> > Structure of the table as below.
>>> >
>>> > Column   |   Type   | Modifiers | Storage |
>>> > Stats target | Description
>>> >
>>> -+--+---+-+--+-
>>> >   col1| bigint   |   | plain   |  |
>>> >   col2 | double precision |   | plain   |  |
>>> >   col3| double precision |   | plain   |  |
>>> >   col4| double precision |   | plain   |  |
>>> >   col5| double precision |   | plain   |  |
>>> >   col6date| date |   | plain   |  |
>>> >   tkey | integer  |   | plain   |  |
>>> >   cid | integer  |   | plain   |  |
>>> >   rtypeid | integer  |   | plain   |          |
>>> >   rid | integer  |   | plain   |  |
>>> >   ckey | bigint   |   | plain   |  |
>>> > Indexes:
>>> >  "idx_tab_cid" btree (cid)
>>> >  "idx_tab_ckey" btree (ckey)
>>> >  "idx_tab_col6date" btree (col6date)
>>> >  "idx_tab_rid" btree (rid)
>>> >  "idx_tab_rtype_id" btree (rtypid)
>>> >  "idx_tab_tkey" btree (tkey)
>>> >
>>> >
>>> > --
>>> > Regards,
>>> > Raghavendra Rao J S V
>>> > Mobile- 8861161425
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>
>>
>> --
>> Regards,
>> Raghavendra Rao J S V
>> Mobile- 8861161425
>>
>


Re: Size of the table is growing abnormally in my database.

2018-08-25 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

Please guide me on the things below.

How can I check whether rows got corrupted?

How can I check whether the table got corrupted?

How can I check which rows occupy more space in the table?

Is this expected?

[image: image.png]
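
For the row-size question, a small sketch (the table name is a placeholder)
using pg_column_size() on the whole row:

SELECT ctid, pg_column_size(t.*) AS row_bytes
FROM my_big_table AS t
ORDER BY row_bytes DESC
LIMIT 10;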

On Sun, 26 Aug 2018 at 09:46, Adrian Klaver 
wrote:

> On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:
> > Hi All,
> >
> > One of our database size is 50gb. Out of it one of the table has
> > 149444622 records. Size of that table is 14GB and its indexes size is
> 16GB.
> > Total size of the table and its indexes are 30GB. I have perfomred the
> > below steps on that table.
> >
> > reindex table table_name;
> >
> > vacuum full verbose analyze on table_name;
> >
> > But still the size of the table and its indexes size are not reduced.
> > Please guid me. How to proceed further.
>
> Rough approximation:
>
> 14,000,000,000 / 150,000,000 = 93 bytes/record.
>
> I am not seeing an issue. If you want to reduce the size of the table
> remove rows.
>
>
> >
> > Structure of the table as below.
> >
> > Column   |   Type   | Modifiers | Storage |
> > Stats target | Description
> >
> -+--+---+-+--+-
> >   col1| bigint   |   | plain   |  |
> >   col2 | double precision |   | plain   |  |
> >   col3| double precision |   | plain   |  |
> >   col4| double precision |   | plain   |  |
> >   col5| double precision |   | plain   |  |
> >   col6date| date |   | plain   |  |
> >   tkey | integer  |   | plain   |  |
> >   cid | integer  |   | plain   |  |
> >   rtypeid | integer  |   | plain   |  |
> >   rid | integer  |   | plain   |  |
> >   ckey | bigint   |   | plain   |  |
> > Indexes:
> >  "idx_tab_cid" btree (cid)
> >      "idx_tab_ckey" btree (ckey)
> >  "idx_tab_col6date" btree (col6date)
> >      "idx_tab_rid" btree (rid)
> >  "idx_tab_rtype_id" btree (rtypid)
> >  "idx_tab_tkey" btree (tkey)
> >
> >
> > --
> > Regards,
> > Raghavendra Rao J S V
> > Mobile- 8861161425
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Size of the table is growing abnormally in my database.

2018-08-25 Thread Raghavendra Rao J S V
Hi All,

One of our databases is 50 GB in size, and one of its tables has 149444622
records. The size of that table is 14 GB and its indexes are 16 GB, so the
total size of the table and its indexes is 30 GB. I have performed the steps
below on that table.

reindex table table_name;

vacuum full verbose analyze table_name;

But the size of the table and its indexes has still not been reduced. Please
guide me on how to proceed further.

Structure of the table as below.

   Column   |       Type       | Modifiers | Storage | Stats target | Description
------------+------------------+-----------+---------+--------------+-------------
 col1       | bigint           |           | plain   |              |
 col2       | double precision |           | plain   |              |
 col3       | double precision |           | plain   |              |
 col4       | double precision |           | plain   |              |
 col5       | double precision |           | plain   |              |
 col6date   | date             |           | plain   |              |
 tkey       | integer          |           | plain   |              |
 cid        | integer          |           | plain   |              |
 rtypeid    | integer          |           | plain   |              |
 rid        | integer          |           | plain   |              |
 ckey       | bigint           |           | plain   |              |
Indexes:
    "idx_tab_cid" btree (cid)
    "idx_tab_ckey" btree (ckey)
    "idx_tab_col6date" btree (col6date)
    "idx_tab_rid" btree (rid)
    "idx_tab_rtype_id" btree (rtypid)
    "idx_tab_tkey" btree (tkey)


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Which settings need to be modified in order to implement only the connection pool mechanism in pgpool

2018-08-23 Thread Raghavendra Rao J S V
Hi All,

To implement a connection pooling mechanism, we had decided to use pg_bouncer.
But due to some dependent software such as openssl, we are not going to use
pg_bouncer.

Therefore we are planning to use pgpool to implement the connection pooling
mechanism in my application.

We don't have master and standby servers; we have only one server.

Is it advisable to use pgpool only as a connection pooling mechanism, without
using load balancing, replication, etc.?

Which settings need to be modified in order to implement only the connection
pooling mechanism in pgpool?

If possible, suggest the steps or provide a URL explaining how to implement
this.
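
A rough pgpool-II configuration sketch for a pooling-only setup with a single
backend (the parameter names are standard pgpool-II ones, but the values are
only illustrative and should be checked against your pgpool version):

listen_addresses = '*'
port = 9999                       # port the application connects to
backend_hostname0 = 'localhost'   # the single Postgres server
backend_port0 = 5432
connection_cache = on             # this is the pooling feature itself
num_init_children = 32            # max concurrent client connections
max_pool = 4                      # cached backend connections per child
load_balance_mode = off           # no load balancing
replication_mode = off            # no replication
master_slave_mode = off           # no standby handling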

-- 
Regards,
Raghavendra Rao J S V


How to check whether a table is busy or free before running ALTER or creating a TRIGGER on that table

2018-08-21 Thread Raghavendra Rao J S V
*Hi All,*

We have thousands of tables, and a few of them are busy at times. If I execute
any ALTER statement or create a trigger on those tables, I am unable to do it.
How can I check whether a table is busy or free before running *ALTER/DDL* or
creating a *TRIGGER* on that table in a PostgreSQL database?
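
A small sketch for checking whether anything currently holds a lock on a given
table (the table name is a placeholder); if this returns rows, a DDL statement
would have to wait for its ACCESS EXCLUSIVE lock:

SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE c.relname = 'my_busy_table';

Setting statement_timeout before issuing the DDL is another way to avoid
blocking indefinitely.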

-- 
Regards,
Raghavendra Rao J S V


Re: How to create a log file in pg_log directory for each execution of my function.

2018-08-19 Thread Raghavendra Rao J S V
Thanks a lot.

On Sun 19 Aug, 2018, 11:09 PM Adrian Klaver, 
wrote:

> On 08/19/2018 10:22 AM, Raghavendra Rao J S V wrote:
> > Hi All,
> >
> > I have a log file as "
> > */opt/postgres/9.2/data/pg_log/postgresql-2018-08-19.csv*".  Due to
> > "*log_rotation_age=1d*", one log file will be created for me in this
> > pg_log directory on everyday.
> >
> > While I am debugging a particular user defined function which contains
> > the lot of raise notice  messages , I would like to create a new log
> > file instead of appending the logs to existing one. How to achieve this?
> >
> > Like this each and every execution of my function, I wold like to get a
> > new log file. How to do this.
>
>
> https://www.postgresql.org/docs/10/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL
>
> pg_rotate_logfile() boolean Rotate server's log file
>
> >
> > --
> > Regards,
> > Raghavendra Rao J S V
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


How to create a log file in pg_log directory for each execution of my function.

2018-08-19 Thread Raghavendra Rao J S V
Hi All,

I have a log file,
"*/opt/postgres/9.2/data/pg_log/postgresql-2018-08-19.csv*". Due to
"*log_rotation_age=1d*", one log file is created in this pg_log directory
every day.

While I am debugging a particular user-defined function which contains a lot
of raise notice messages, I would like to create a new log file instead of
appending the logs to the existing one. How can I achieve this?

That is, for each and every execution of my function I would like to get a new
log file. How can I do this?
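
A sketch combining the pg_rotate_logfile() hint from the reply in this thread
with a timestamped log_filename, so each forced rotation lands in its own file
(requires superuser; the filename pattern is only an example):

-- in postgresql.conf: make every rotation produce a distinct file
-- log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

SELECT pg_rotate_logfile();   -- start a fresh log file
SELECT my_function();         -- the function being debugged (placeholder name)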

-- 
Regards,
Raghavendra Rao J S V


Re: Difference between "autovacuum_naptime" and "autovacuum_vacuum_cost_delay"?

2018-08-16 Thread Raghavendra Rao J S V
Thank you very much for your response.

Could you clarify the things below, please?

What is the difference between "autovacuum_naptime" and
"autovacuum_vacuum_cost_delay"?

What is the difference between the "autovacuum launcher process" and the
"autovacuum worker process"?

How can the number of "autovacuum launcher" and "autovacuum worker" processes
be controlled?

Does the "autovacuum launcher process" sleep? If yes, which parameter controls
it?

Does the "autovacuum worker process" sleep? If yes, which parameter controls it?

Regards,
Raghavendra Rao


On 17 August 2018 at 09:30, Joshua D. Drake  wrote:

> On 08/16/2018 06:10 PM, Raghavendra Rao J S V wrote:
>
> Hi All,
>
> I have gone through several documents but I am still have confusion
> related to "autovacuum_naptime" and "autovacuum_vacuum_cost_delay". Could
> you clarify me with an example.
>
> When Auto vacuum worker process will start?
>
>
> Autovacuum checks for relations that need to be vacuumed/analyzed every
> "naptime"
>
> When Auto vacuum worker process will stop?
>
>
> When it is done with the list of relations that needed work that were
> found at the launch of "naptime"
>
>
> Does Auto vacuum worker process will sleep like Auto vacuum launcher
> process ?
>
>
> The launcher process sleeps for naptime, then wakes up to check what needs
> to be worked on
>
>
> What is the difference between Auto vacuum launcher process and Auto
> vacuum worker process?
>
>
> The launcher is the process that spawns the worker processes (I think).
>
> JD
>
>
>
>
> --
> Regards,
> Raghavendra Rao J S V
>
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Difference between "autovacuum_naptime" and "autovacuum_vacuum_cost_delay"?

2018-08-16 Thread Raghavendra Rao J S V
Hi All,

I have gone through several documents but I still have some confusion related
to "autovacuum_naptime" and "autovacuum_vacuum_cost_delay". Could you clarify
them for me with an example?

When does an autovacuum worker process start?

When does an autovacuum worker process stop?

Does an autovacuum worker process sleep like the autovacuum launcher process?

What is the difference between the autovacuum launcher process and an
autovacuum worker process?



-- 
Regards,
Raghavendra Rao J S V


Re: is there any adverse effect on DB if I set autovacuum scale factor to zero?

2018-08-13 Thread Raghavendra Rao J S V
Hi Tomas,

Thank you very much for your response.

As we know, a table becomes a candidate for the autovacuum process based on
the formula below.


*Autovacuum VACUUM threshold for a table = autovacuum_vacuum_scale_factor *
number of tuples + autovacuum_vacuum_threshold*



*Current settings in my database are as follows.*


*autovacuum_vacuum_scale_factor = 0.1 *

*autovacuum_vacuum_threshold = 40*



Because of the formula above, dead tuples accumulate in proportion to the
number of live tuples, as shown by the query below.


select relname, n_live_tup, n_dead_tup, (n_live_tup * .1 + 40) expected_to_autovacuum, *
from pg_stat_user_tables
where n_dead_tup > 800
order by n_live_tup desc
limit 100;




To avoid the accumulation of dead tuples, I would like to change the
autovacuum settings in *"postgresql.conf"* as below.

*autovacuum_vacuum_scale_factor = 0.01*

*autovacuum_vacuum_threshold = 100*


*Kindly share your views. Would this cause any adverse effect on the DB?*
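
As a worked example of the formula above, for a table with 1,000,000 live
tuples:

current settings:  0.1  * 1,000,000 + 40  = 100,040 dead tuples before autovacuum triggers
proposed settings: 0.01 * 1,000,000 + 100 =  10,100 dead tuples before autovacuum triggers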

Regards,
Raghavendra Rao



On 13 August 2018 at 18:05, Tomas Vondra 
wrote:

>
>
> On 08/13/2018 11:07 AM, Raghavendra Rao J S V wrote:
>
>> Hi All,
>>
>> We are using postgres *9.2*  version on *Centos *operating system.  We
>> have around *1300+* tables.We have following auto vacuum settings are
>> enables. Still few of the tables(84 tables) which are always busy are not
>> vacuumed.Dead tuples in those tables are more than 5000.  Due to that
>> tables are bloating and observed few areas has performance degradation.
>>
>>
> You don't say how large the tables are, so it's impossible to say whether
> 5000 dead tuples is excessive or not. IMHO it's a negligible amount and
> should not lead to excessive bloat or issues.
>
> A certain amount of wasted is expected - it's a trade-off between
> immediate and delayed cleanup. If you delay the cleanup a bit, it's going
> to be more efficient overall.
>
> It's also unclear why the tables are not vacuumed - it may easily be due
> to all the autovacuum workers being constantly busy, unable to cleanup all
> tables in a timely manner. In that case lowering the threshold is not going
> to help, on the contrary.
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


is there any adverse effect on DB if I set autovacuum scale factor to zero?

2018-08-13 Thread Raghavendra Rao J S V
Hi All,

We are using Postgres *9.2* on a *CentOS* operating system. We have around
*1300+* tables, and the following autovacuum settings are enabled. Still, a
few of the tables (84 tables) which are always busy are not being vacuumed.
Dead tuples in those tables number more than 5000. Because of that, the tables
are bloating and we have observed performance degradation in a few areas.


autovacuum = on
log_autovacuum_min_duration = 100
autovacuum_max_workers = 5
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 40
autovacuum_analyze_threshold = 20
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_freeze_max_age = 2
autovacuum_vacuum_cost_delay = 30ms
autovacuum_vacuum_cost_limit = 1200
# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-1 credits
#vacuum_cost_page_miss = 10 # 0-1 credits
#vacuum_cost_page_dirty = 20# 0-1 credits
vacuum_cost_limit = 200 # 1-1 credits


To avoid the table bloat and performance degradation, we would like to set
*autovacuum_vacuum_scale_factor* (to zero) and *autovacuum_vacuum_threshold*
(to 200) for the busy tables as below. Please let me know whether there is any
adverse effect on the DB if I set the autovacuum scale factor to zero for
certain tables. If yes, what is the effect and how can it be tested?

ALTER TABLE cmdevice SET (autovacuum_vacuum_scale_factor = 0,
autovacuum_vacuum_threshold = 200);

Kindly also let me know the role of the *autovacuum_vacuum_cost_delay* and
*autovacuum_vacuum_cost_limit* settings.

Regards,
Raghavendra Rao


How to avoid dead rows in tables.

2018-08-06 Thread Raghavendra Rao J S V
Hi All,


Autovacuum is enabled in our database, but a few of the tables contain more
than 5,000 dead tuples. The number of dead rows keeps increasing day by day if
we do not perform VACUUM FULL. Once a month we perform VACUUM FULL after
stopping the application server processes, and then restart them afterwards.


How can we avoid accumulating dead tuples in those tables? Is there any other
approach to removing the dead tuples without VACUUM FULL / downtime?


Note:- We are using Postgres version 9.2.
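
One approach already discussed in these threads is to make autovacuum more
aggressive for just the busy tables rather than globally; a sketch (the table
name and values are only illustrative):

ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 200
);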


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Is it ok to run vacuum full verbose command for live database for the tables which has more dead tuples?

2018-07-18 Thread Raghavendra Rao J S V
We have thousands of tables, but around 20 to 40 of them are always busy, and
because of that those tables are bloating.

To avoid this, we run a shell script which performs VACUUM FULL on the tables
that have more than ten thousand dead tuples. While running it we stop all
application processes and then run VACUUM FULL on the tables with the most
dead tuples.


   1. Is it OK to run the *vacuum full verbose* command on a live database for
   the tables which have more dead tuples (greater than ten thousand)?
   2. Does it cause any *adverse* effects?


Please clarify. Thanks in advance.

-- 
Regards,
Raghavendra Rao J S V


duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index" while performing full vacuum on the database

2018-05-09 Thread Raghavendra Rao J S V
Hi,


While performing VACUUM FULL, I received the highlighted error below.
Please guide me on how to resolve this issue.


/opt/postgres/9.2/bin/psql -p 5433 --username=cmuser cpcm -c "*VACUUM
FULL ANALYZE*;"


ERROR:  *duplicate key value violates unique constraint
"pg_statistic_relid_att_inh_index"*
DETAIL:  Key (starelid, staattnum, stainherit)=(18915, 6, f) already exists.

'pg_statistic' is a metadata table. Is it OK if I remove the duplicated record
from the 'pg_statistic' table?
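
One commonly suggested recovery sketch: the rows in pg_statistic are only
cached statistics and are rebuilt by ANALYZE, so the conflicting entries can
be removed and regenerated (18915 is the starelid from the error detail; doing
this requires superuser):

-- locate the conflicting statistics rows
SELECT starelid::regclass, staattnum, stainherit, count(*)
FROM pg_statistic
GROUP BY 1, 2, 3
HAVING count(*) > 1;

-- remove the stale rows for the affected relation and regenerate them
-- DELETE FROM pg_statistic WHERE starelid = 18915;
-- ANALYZE <the table that OID 18915 refers to>;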

-- 
Regards,
Raghavendra Rao J S V


Re: difference between checkpoint_segments and wal_keep_segments in postgres configuration file

2018-04-20 Thread Raghavendra Rao J S V
Thanks a lot.

On Wed 11 Apr, 2018, 9:07 AM Michael Paquier,  wrote:

> On Tue, Apr 10, 2018 at 11:06:54PM +0530, Raghavendra Rao J S V wrote:
> > I am not clear the difference between checkpoint_segments and
> > wal_keep_segments .
> >
> > I would like to now below things. Please explain.Thanks in advance.
> >
> >- Difference  between *checkpoint_segments *and *wal_keep_segments *
> >value
> >- Role  of  *checkpoint_segments *and *wal_keep_segments *
> >- Which one should has higher value.
>
> Documentation is king here.  For checkpoint_segments:
>
> https://www.postgresql.org/docs/9.4/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS
>
>   Maximum number of log file segments between automatic WAL
>   checkpoints (each segment is normally 16 megabytes). The default is
>   three segments. Increasing this parameter can increase the amount of
>   time needed for crash recovery. This parameter can only be set in
>   the postgresql.conf file or on the server command line.
>
> For wal_keep_segments:
>
> https://www.postgresql.org/docs/9.4/static/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER
>
>   Specifies the minimum number of past log file segments kept in the
>   pg_xlog directory, in case a standby server needs to fetch them for
>   streaming replication. Each segment is normally 16 megabytes. If a
>   standby server connected to the sending server falls behind by more than
>   wal_keep_segments segments, the sending server might remove a WAL
>   segment still needed by the standby, in which case the replication
>   connection will be terminated. Downstream connections will also
>   eventually fail as a result. (However, the standby server can recover by
>   fetching the segment from archive, if WAL archiving is in use.)
>
> Mentioning checkpoint_segments implies that you are using PostgreSQL 9.4
> or older versions as this has been removed and replaced by max_wal_size
> in 9.5.  You should consider upgrading to a newer version.
>
> Hence the first is used in the context of normal operations to decide
> the frequency of checkpoints when those are triggered by volume.  The
> second can be used with streaming replication to give a standby a higher
> catchup window.  Giving value to one or the other depends on the
> context, and both are usable in completely different circumstances.
> --
> Michael
>


Re: Planning to change autovacuum_vacuum_scale_factor value to zero. Please let me know if there is any negative impact.

2018-04-11 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

As I mentioned in my previous mail, I am planning to set
'*autovacuum_vacuum_scale_factor*' to *zero* and *autovacuum_vacuum_threshold*
to *150* in the postgresql.conf file.

Are you suggesting that I set "autovacuum_vacuum_cost_limit" to zero, or
"autovacuum_vacuum_scale_factor" to zero, or both? Please clarify.

Regards,
Raghavendra Rao

On Wed, Apr 11, 2018 at 12:59 PM, Laurenz Albe 
wrote:

> Raghavendra Rao J S V wrote:
> > We are using postgres 9.2  version on Centos operating system.  We have
> around 1300+ tables.
> > We have following auto vacuum settings are enables. Still few of the
> tables which are always busy are not vacuumed. Due to that tables are
> bloating and observed few areas has performance degradation.
> >
> > autovacuum_max_workers = 6
> > autovacuum_naptime = 15s
> > autovacuum_vacuum_threshold = 25
> > autovacuum_analyze_threshold = 10
> > autovacuum_vacuum_scale_factor = 0.1
> > autovacuum_analyze_scale_factor = 0.05
> > autovacuum_vacuum_cost_delay = 10ms
> > autovacuum_vacuum_cost_limit = 1000
> >
> > To avoid the above problem, I am planning to make '
> autovacuum_vacuum_scale_factor' value to zero and
> autovacuum_vacuum_threshold  value to 150. Please suggest me does it have
> any negative impact.
>
> That's an excellent way to keep your database from functioning well.
>
> Rather, raise autovacuum_vacuum_cost_limit, or, more aggressively,
> set autovacuum_vacuum_cost_delay to 0.
>
> It is better to change the settings on individual busy tables than
> changing them globally.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Planning to change autovacuum_vacuum_scale_factor value to zero. Please let me know if there is any negative impact.

2018-04-10 Thread Raghavendra Rao J S V
We are using Postgres *9.2* on a *CentOS* operating system. We have around
1300+ tables.
We have the following autovacuum settings enabled. Still, a few of the tables
which are always busy are not being vacuumed. Because of that, the tables are
bloating and we have observed performance degradation in a few areas.

autovacuum_max_workers = 6
autovacuum_naptime = 15s
autovacuum_vacuum_threshold = 25
autovacuum_analyze_threshold = 10
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 10ms
autovacuum_vacuum_cost_limit = 1000

To avoid the above problem, I am planning to set
'autovacuum_vacuum_scale_factor' to zero and autovacuum_vacuum_threshold to
150. Please let me know whether this would have any negative impact.


-- 
Regards,
Raghavendra Rao J S V


difference between checkpoint_segments and wal_keep_segments in postgres configuration file

2018-04-10 Thread Raghavendra Rao J S V
I am not clear on the difference between checkpoint_segments and
wal_keep_segments.

I would like to know the things below. Please explain. Thanks in advance.


   - The difference between the *checkpoint_segments* and *wal_keep_segments*
   values
   - The role of *checkpoint_segments* and *wal_keep_segments*
   - Which one should have the higher value

-- 
Regards,
Raghavendra Rao J S V


Suggest the best values for the postgres configuration parameters

2018-04-10 Thread Raghavendra Rao J S V
We are using Postgres *9.2* on a *CentOS* operating system.

The total RAM available is *80 GB*. At present we don't have any connection
pooling mechanism. The maximum number of connections allowed is 1000.

Could you please suggest the best values for the configuration parameters
below?


   - shared_buffers
   - effective_cache_size
   - work_mem
   - maintenance_work_mem
   - checkpoint_segments
   - wal_keep_segments
   - checkpoint_completion_target
   - Max_prepared_transactions =0

-- 
Regards,
Raghavendra Rao J S V


How to install pgTAP on a CentOS machine

2018-04-03 Thread Raghavendra Rao J S V
Hi,

How can I install pgTAP on a CentOS machine? I tried to install it but had no
luck. Please guide me on how to proceed further.
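
A rough source-install sketch (assuming the PostgreSQL server development
packages are installed and pg_config is on the PATH):

git clone https://github.com/theory/pgtap.git
cd pgtap
make
sudo make install

# then, inside the target database:
# CREATE EXTENSION pgtap;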

-- 
Regards,
Raghavendra Rao J S V


Please suggest the best-suited unit test framework for a PostgreSQL database.

2018-03-31 Thread Raghavendra Rao J S V
Good morning.

Please suggest the best-suited unit test framework for a PostgreSQL database,
and also share the related documents for understanding the framework.

-- 
Regards,
Raghavendra Rao J S V


ERROR: right sibling's left-link doesn't match: block 5 links to 8 instead of expected 2 in index "pg_toast_2619_index"

2018-03-15 Thread Raghavendra Rao J S V
*Hi All,*


*We are facing the error below in my Postgres 9.2 production database. Please
help us understand why we are facing this issue, how to resolve it, and what
its impact is.*


*ERROR:  right sibling's left-link doesn't match: block 5 links to 8
instead of expected 2 in index "pg_toast_2619_index"*


*CONTEXT:  automatic vacuum of table "qovr.pg_toast.pg_toast_2619"*
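
The usual first step for a corrupted btree index like this one (a sketch; the
index name is taken from the error above, and a maintenance window is
advisable) is to rebuild it:

REINDEX INDEX pg_toast.pg_toast_2619_index;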


-- 
Regards,
Raghavendra Rao J S V


org.postgresql.util.PSQLException: Error could not open file "base/": No such file or directory

2018-03-05 Thread Raghavendra Rao J S V
Hi,

A few of the tables and indexes are bloated even though autovacuum is enabled.

To reclaim the space, I ran VACUUM FULL on the larger tables and also
performed REINDEX on the indexes that had grown large. Now the size of the
database is under control.

After performing the VACUUM FULL and reindexing on the larger tables, I am
facing the error below.

*org.postgresql.util.PSQLException: Error could not open file "base/": No
such file or directory*

Please guide me on how to resolve the above error, and let me know whether it
has any relation to the VACUUM FULL or reindexing operations I performed.

-- 
Regards,
Raghavendra Rao J S V


Function execution is taking more than 2hrs

2018-02-06 Thread Raghavendra Rao J S V

v_select varchar(5000);

id_error_count int:=0;

rec record;



BEGIN



--This  funciton updates the deviceid column for spcified table using
endpoint_deviceids_barediscovery table after rediscovery

  v_select:='SELECT distinct t2.deviceid_old,t2.deviceid_new

  FROM  '|| p_table
||' t1,endpoint_deviceids_barediscovery t2

WHERE t1.'||p_column||'=t2.deviceid_old

  AND
t2.deviceid_new is not null';





  RAISE NOTICE 'Updation of endpoints with newdeviceid for %
started and query is %',p_table,v_select;

  PERFORM insert_log('INFO' ,'pop_new_deviceid_for_table'
,'Updation of  endpoints with newdeviceid for '||p_table||' started.Query
is '|| v_select);



  FOR rec IN EXECUTE v_selectLOOP



BEGIN

  EXECUTE FORMAT('UPDATE %I set %I = %s where
%I=%s',p_table,p_column,rec.deviceid_new,p_column,rec.deviceid_old);

  EXCEPTION

   WHEN OTHERS THEN


id_error_count:=id_error_count+1;

  RAISE NOTICE
'Error occurred while updating new deviceid column of % table  for deviceid
(%) % using pop_new_deviceid_for_table  %
%',p_table,p_column,rec.deviceid_old, SQLERRM, SQLSTATE;

  END;



  END LOOP;





 EXCEPTION WHEN OTHERS THEN

  RAISE NOTICE 'Error occurred while executing
pop_new_deviceid_for_table for %  table % %', p_table,SQLERRM,
SQLSTATE;


   PERFORM insert_log('ERROR' ,'pop_new_deviceid_for_table' ,'Error
occurred while executing  pop_endpoints_with_old_deviceid for '||p_table||'
table '||SQLSTATE||'  '||SQLERRM);

*END;*

*$$ LANGUAGE plpgsql;*



When I execute select pop_endpoints_with_new_deviceid(), it updates 20 tables
in a single shot. In some environments it completes in 5 minutes, and in some
environments it takes around 2 hrs 25 minutes. I have experienced this issue
several times in different environments, but the data and configuration
settings of all the environments are the same. There are no locks in the
database while this script is executing.


*Please guide me:*


*Sometimes “select pop_endpoints_with_new_deviceid()” takes just 5 minutes and
sometimes more than 2 hrs 25 minutes. How can I narrow down the issue?*


*How should bulk update/insert/delete be done in Postgres? Do I need to modify
any configuration parameters in the database?*


*How can I track the time taken by each function in Postgres?*
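
For the last question, one sketch is to enable function-level statistics and
read pg_stat_user_functions afterwards (both exist in 9.2; this needs
superuser or a postgresql.conf change):

-- enable timing of user-defined functions for the current session
SET track_functions = 'all';

-- after running the function, inspect per-function timings (milliseconds)
SELECT funcname, calls, total_time, self_time
FROM pg_stat_user_functions
ORDER BY total_time DESC;

Alternatively, RAISE NOTICE with clock_timestamp() at the start and end of
each step gives per-step timings inside the function itself.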


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: pg_basebackup is taking more time than expected

2018-01-15 Thread Raghavendra Rao J S V
I am looking for help to minimise the time taken by the pg_basebackup utility.

As informed earlier, we take the backup of the database with the pg_basebackup
utility using the command below.

$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=6 --pgdata=- -D /opt/backup_db

According to our previous discussion, pg_basebackup does not depend on any of
the postgresql configuration parameters, and if I go for the gzip format we
need to compromise on time.

We are planning to proceed with the steps below. Please correct me if I am
wrong.


   1. Identify the larger indexes (whose size is above 256 MB) and drop them.
   This will reduce the size of the database.
   2. Take the backup of the database.
   3. Recreate the indexes in the environment that is created from the backup.


I am new to the Postgres database. Could you help me construct the queries to
drop and create the indexes, please?
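
A rough sketch for step 1 (a query listing every index above 256 MB together
with the SQL needed to drop it and later recreate it); note that indexes
backing primary key or unique constraints cannot simply be dropped with DROP
INDEX and would have to be handled through their constraints instead:

SELECT n.nspname AS schema_name,
       c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS index_size,
       format('DROP INDEX %I.%I;', n.nspname, c.relname) AS drop_sql,
       pg_get_indexdef(c.oid) || ';' AS recreate_sql
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'
  AND pg_relation_size(c.oid) > 256 * 1024 * 1024
ORDER BY pg_relation_size(c.oid) DESC;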


Re: pg_basebackup is taking more time than expected

2018-01-13 Thread Raghavendra Rao J S V
Thank you very much for your prompt response.

What I was asking in my previous mail is: does pg_basebackup depend on any of
the Postgres configuration parameters, such as shared_buffers,
maintenance_work_mem, etc.? If yes, which configuration parameters do I need
to take care of or increase?


Please also let me know what this means:

*Please don't top-post on the PG mailing lists.*

*How do I get clarifications on my query?*

On Sat, Jan 13, 2018 at 9:52 PM, Stephen Frost  wrote:

> Greetings,
>
> Please don't top-post on the PG mailing lists.
>
> * Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> > pg_basebackup utility  depends on which are the parameters?
>
> I'm not sure what you're asking here.
>
> > Is there any possibility to run the pg_basebackup in multi thread?
>
> No, not today.  There's been discussion about making it multi-threaded
> but I seriously doubt that'll happen for v11 at this point.
>
> > To improve the speed of the backup of database using pg_basebackup
> utility
> > we shutdown the database and started alone database services. Till that
> > time other sevices won't run. We observed some improvement but any other
> > things we need to perform to reduce the time taken by the  pg_basebackup
> > utility.
>
> Sure, reducing the load of the system might make pg_basebackup a little
> faster, but seems unlikely to help it a lot in this case, and it means
> you have downtime which might not be ideal.
>
> > We are using below command to take the backup of the database. Any
> > improvements to reduce the time taken by backup statement.
> >
> > $PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar
> --gzip
> > --compress=6 --pgdata=- -D /opt/backup_db
>
> Might be faster if you didn't compress it, but, of course, then you
> wouldn't have a compressed backup.
>
> This is exactly the kind of issue that lead to the development of
> pgBackRest.  Larger databases really do need multi-threaded backups and
> there weren't any backup tools for PG which were multi-threaded when we
> started.  There's a few other options now, which is good, but
> pg_basebackup isn't one of them.
>
> Thanks!
>
> Stephen
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: pg_basebackup is taking more time than expected

2018-01-13 Thread Raghavendra Rao J S V
Please let me know the details below.

Which parameters does the pg_basebackup utility depend on?

Is there any possibility of running pg_basebackup multi-threaded?

To improve the speed of the database backup using the pg_basebackup utility,
we shut down the database and started only the database services; no other
services were running during that time. We observed some improvement, but are
there any other things we should do to reduce the time taken by the
pg_basebackup utility?

We are using the command below to take the backup of the database. Are there
any improvements that would reduce the time taken by the backup command?

$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=6 --pgdata=- -D /opt/backup_db



On Fri, Jan 12, 2018 at 6:37 PM, Stephen Frost  wrote:

> Greetings,
>
> * Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> > We have database with the size of *425GB*. Out of this 425 GB, Around 40
> to
> > 60% of space occupied by the indexes. Ram allocated to this machine is
> > 32GB  and configuration parameters below.
> [...]
> > In order to take the backup of the database we are using pg_basebackup
> > utility. But it is taking several hours (Around 18hours). Please guide me
> > what are the configuration parameters need to modify to reduce the time
> > taken by the pg_basebackup utility.
>
> Unfortunately, there aren't a lot of great options with pg_basebackup,
> but part of the question is- why is it taking so long?  That looks to be
> a rate of less than 60Mb/s, assuming I did my calculations right, and
> that's pretty slow.  How are you running pg_basebackup?  If you're doing
> compression and the pg_basebackup process is consistently at 100% then
> that's just the rate which a single CPU can compress data for you.  If
> you're running the pg_basebackup across a WAN, then perhaps the
> throughput available is only 60Mb/s.
>
> > Is there any possibility to exclude the index data while taking the
> > pg_basebackup?
>
> This isn't currently possible, no, and it would seriously increase the
> time required to restore the system.  If you aren't concerned about
> restore time at all (though, really, you probably should be) then you
> could consider using pg_dump instead, which can be run in parallel and
> wouldn't include the indexes.  The format is a logical point-in-time
> dump though, so you aren't able to do point-in-time recovery (playing
> WAL forward) and reloading the data and rebuilding the indexes will take
> quite a while.
>
> Lastly, if the issue is that pg_basebackup is single-threaded, or that
> you need multiple TCP connections to get higher throughput, then you
> should consider one of the alternative physical (file-based) backup
> solutions available for PostgreSQL, e.g.: pgBackRest, barman, or WAL-G.
>
> Thanks!
>
> Stephen
>



-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


pg_basebackup is taking more time than expected

2018-01-12 Thread Raghavendra Rao J S V
Hi All,

We have a database with a size of *425 GB*. Out of this 425 GB, around 40 to
60% of the space is occupied by indexes. The RAM allocated to this machine is
32 GB, and the configuration parameters are below.

max_connections = 800
shared_buffers = 9GB
effective_cache_size = 18GB
work_mem = 10MB
maintenance_work_mem = 1536MB
checkpoint_segments = 50
wal_keep_segments = 80
checkpoint_completion_target = 0.9
wal_buffers = 16MB
Max_prepared_transactions =0
synchronous_commit = on

We use the pg_basebackup utility to take the backup of the database, but it is
taking several hours (around 18 hours). Please guide me on which configuration
parameters need to be modified to reduce the time taken by the pg_basebackup
utility.

Is there any possibility of excluding the index data while taking the
pg_basebackup?




-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425