I'm trying to troubleshoot a slowness issue with pg_restore and stumbled
across a recent post about pg_restore scanning the whole file:

> "scanning happens in a very inefficient way, with many seek calls and
small block reads. Try strace to see them. This initial phase can take
hours in a huge dump file, before even starting any actual restoration."
see:
https://www.postgresql.org/message-id/E48B611D-7D61-4575-A820-B2C3EC2E0551%40gmx.net

I'm currently having this same issue.
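
The inefficient scan is easy to observe with strace while pg_restore is
running; a minimal check, assuming <pg_restore_pid> stands for the PID of
the running pg_restore process:

strace -c -e trace=lseek,read -p <pg_restore_pid>

This only attaches and counts syscalls (detach with Ctrl-C); the pattern
described in the quoted post would show up as many lseek calls paired with
small reads.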

In the early stage of the restore I can see a lot of disk write activity,
but as time goes by the disk writes taper off.
I can see the COPY process in Postgres, but it is not using any CPU; the
processes that are using CPU are the pg_restore processes.

I can recreate this issue when restoring a specific table to stdout.

i.e.:
pg_restore -vvvv -t <some_table> DB.pgdump -f -

If the table is at the bottom of the TOC it takes hours before I get a
result, but I get an almost immediate result when the table is at the top.
Parallel restore suffers from the same issue, where each process has to
scan the archive for each table it restores.
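
For reference, the table's position can be checked from the archive's table
of contents, and the parallel restore is invoked along these lines (the job
count is a placeholder and <some_table> stands for the table name):

# show where the table sits in the TOC of the custom-format archive
pg_restore -l DB.pgdump | grep -n <some_table>

# parallel restore into the target database; each worker still has to
# locate its table's data in the archive
pg_restore -j 8 -d DB DB.pgdump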

What is the best way to speed up the restore?


More info about my environment:
pg_restore (PostgreSQL) 17.6

Archive :
; Archive created at 2025-09-16 16:08:28 AEST
;     dbname: DB
;     TOC Entries: 8221
;     Compression: none
;     Dump Version: 1.14-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 14.15
;     Dumped by pg_dump version: 14.19 (Ubuntu 14.19-1.pgdg22.04+1)
