On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari <fluca1...@gmail.com> wrote:

> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
> <maheshpostgr...@gmail.com> wrote:
> >  A logical dump of that table takes more than 7 hours to complete.
> >
> >  I need to reduce the dump time of that table, which is 88 GB in size.
>
> Good luck!
> I would see two possible solutions to the problem:
> 1) use physical backups and switch to incremental ones (e.g., pgbackrest)
> 2) partition the table and back up individual partitions, if possible
> (constraints?), and be aware it will become harder to maintain (added
> partitions, and so on).
>
> Are all of the 88 GB written during a bulk process? I guess not, so
> with partitioning you could avoid locking the whole dataset and reduce
> contention (and thus time).
>
> Luca
>
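A minimal sketch of both suggestions, for illustration only (the stanza
name "main", the database "mydb", and the partition "big_table_2019_08"
are hypothetical placeholders, not names from this thread):

    # option 1: one full physical backup, then cheap incrementals
    pgbackrest --stanza=main --type=full backup
    pgbackrest --stanza=main --type=incr backup

    # option 2: once partitioned, dump only the partitions that change
    pg_dump -d mydb -t big_table_2019_08 -Fc -f big_table_2019_08.dump

An incremental backup copies only the files changed since the previous
backup, so its cost stops scaling with the full table size.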


Hi respected Postgres team,

  Are all of the 88 GB written during a bulk process?
   No.
 Earlier the table size was 88 GB; now it is about 148 GB.
 Is there any way to reduce the dump time of the table, now 148 GB in
size, without partitioning it?
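One avenue that may be worth testing, since pg_dump's -j option only
parallelizes across tables and cannot split a single table: dump the
table in key-range slices with concurrent COPY sessions. A rough
sketch, with all names and boundaries hypothetical (a table big_table
whose integer primary key id is spread roughly evenly up to about 200
million rows):

    # each psql session copies one slice; run them concurrently
    psql -d mydb -c "\copy (SELECT * FROM big_table WHERE id < 100000000) TO 'slice1.csv' CSV" &
    psql -d mydb -c "\copy (SELECT * FROM big_table WHERE id >= 100000000) TO 'slice2.csv' CSV" &
    wait

If built-in compression is the bottleneck instead, pg_dump -Fc -Z0
(no compression) followed by a fast external compressor such as lz4
may also shorten the wall-clock time.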


Regards
Durgamahesh Manne
