On Mon, Jan 15, 2018 at 7:57 AM, Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:

> I am looking for the help to minimise the time taken by the pg_basebackup
> utility.
>
> As informed earlier, we are taking the backup of the database using the
> pg_basebackup utility with the command below.
>
> $PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
> --compress=6 --pgdata=- -D /opt/backup_db
>
> According to our previous discussion, pg_basebackup does not depend on any
> of the postgresql configuration parameters. If I go for the gzip format we
> need to compromise on time.
>
> We are planning to take the backup by following the steps below. Please
> correct me if I am wrong.
>
>
>    1. Identify the larger indexes (those above 256 MB) and drop them. This
>    will reduce the size of the database.
>
I'm with Stephen on this one - going into standalone mode and dropping
indexes seems odd...

> I am new to the Postgres database. Could you help me construct the queries
> to drop and create the indexes, please?
>

https://www.postgresql.org/docs/10/static/sql-commands.html

see "CREATE INDEX" and "DROP INDEX" in that listing.​

You haven't explained the purpose of the backups, but if it is for
availability and you are this concerned about speed, you should invest in
learning about and setting up a hot-standby replication configuration.
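
For reference, and only as a sketch (the host name and replication role
below are placeholders, and the primary still needs wal_level,
max_wal_senders, and pg_hba.conf configured for replication), the standby's
data directory could be seeded with something like:

$PGHOME/bin/pg_basebackup -h primary-host -p 5433 -U replicator \
    -D /opt/standby_data -X stream -R -P -v

The -R switch writes out the recovery settings for you so the copy starts
up as a standby once you bring it online.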

Additionally, consider whether you can improve the I/O of the setup you are
using since, as Stephen said (and I am far from experienced in this area),
18 hours for less than 500GB of data seems extraordinarily long.

David J.
