It's towards the end of this long mailing list thread from a couple of
weeks ago.

https://www.postgrespro.com/list/id/[email protected]

On Thu, Sep 18, 2025 at 8:58 AM R Wahyudi <[email protected]> wrote:

> Hi All,
>
> Thanks for the quick and accurate response!  I've never been so happy to
> see IOwait on my system!
>
> I might be blind, as I can't find any information about 'offset' in the
> pg_dump documentation.
> Where can I find more info about this?
>
> Regards,
> Rianto
>
> On Wed, 17 Sept 2025 at 13:48, Ron Johnson <[email protected]>
> wrote:
>
>>
>> PG 17 has integrated zstd compression, while --format=directory lets you
>> do multi-threaded dumps.  That's much faster than piping a single-threaded
>> pg_dump into a multi-threaded compression program.
>>
>> (If for _Reasons_ you require a single-file backup, then tar the
>> directory of compressed files using the --remove-files option.)
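>>
>> Something along these lines, for example (untested sketch; database name,
>> output directory, job count, and compression level are all just
>> illustrative):
>>
>> pg_dump -Fd -j 8 --compress=zstd:9 -d mydb -f mydb.dir
>> tar -cf mydb.dump.tar --remove-files mydb.dir
>>
>> The first command writes one compressed file per table using 8 parallel
>> workers; the second rolls the directory up into a single tar file,
>> deleting the originals as it archives them.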
>>
>> On Tue, Sep 16, 2025 at 10:50 PM R Wahyudi <[email protected]> wrote:
>>
>>> Sorry for not including the full command - yes, it's piping to a
>>> compression command:
>>>  | lbzip2 -n <threadsforbzipgoeshere> --best > <filenamegoeshere>
>>>
>>>
>>> I think we found the issue! I'll do further testing and see how it goes!
>>>
>>> On Wed, 17 Sept 2025 at 11:02, Ron Johnson <[email protected]>
>>> wrote:
>>>
>>>> So, piping or redirecting to a file?  If so, then that's the problem.
>>>>
>>>> Running pg_dump directly to a file puts the file offsets in the TOC.
>>>>
>>>> This is how I do custom dumps:
>>>> cd $BackupDir
>>>> pg_dump -Fc --compress=zstd:long -v -d${db} -f ${db}.dump  2> ${db}.log
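>>>>
>>>> And roughly the matching parallel restore (untested; the job count is
>>>> just illustrative):
>>>> pg_restore -j 4 -v -d ${db} ${db}.dump 2> ${db}.restore.log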
>>>>
>>>> On Tue, Sep 16, 2025 at 8:54 PM R Wahyudi <[email protected]> wrote:
>>>>
>>>>> pg_dump was done using the following command :
>>>>> pg_dump -Fc -Z 0 -h <host> -U <user> -w -d <database>
>>>>>
>>>>> On Wed, 17 Sept 2025 at 08:36, Adrian Klaver <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> On 9/16/25 15:25, R Wahyudi wrote:
>>>>>> >
>>>>>> > I'm trying to troubleshoot the slowness issue with pg_restore and
>>>>>> > stumbled across a recent post about pg_restore scanning the whole
>>>>>> > file:
>>>>>> >
>>>>>> > > "scanning happens in a very inefficient way, with many seek calls
>>>>>> > > and small block reads. Try strace to see them. This initial phase
>>>>>> > > can take hours in a huge dump file, before even starting any
>>>>>> > > actual restoration."
>>>>>> >
>>>>>> > see:
>>>>>> > https://www.postgresql.org/message-id/E48B611D-7D61-4575-A820-B2C3EC2E0551%40gmx.net
>>>>>>
>>>>>> This was for pg_dump output that was streamed to a Borg archive and,
>>>>>> as a result, had no object offsets in the TOC.
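>>>>>>
>>>>>> In other words (database and file names just for illustration):
>>>>>>
>>>>>> # piped: stdout is not seekable, so pg_dump cannot record data offsets
>>>>>> pg_dump -Fc -Z 0 -d mydb | lbzip2 > mydb.dump.bz2
>>>>>>
>>>>>> # written directly: pg_dump can seek back and fill in the offsets
>>>>>> pg_dump -Fc -d mydb -f mydb.dump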
>>>>>>
>>>>>> How are you doing your pg_dump?
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Adrian Klaver
>>>>>> [email protected]
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>

-- 
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!
