On Sat, 30 Mar 2024, 10:04 Alexander Farber, <alexander.far...@gmail.com>
wrote:

> Thank you, Justin -
>
> On Sat, Mar 30, 2024 at 4:33 AM Justin Clift <jus...@postgresql.org>
> wrote:
>
>> On 2024-03-30 05:53, Alexander Farber wrote:
>> > I use the following postgresql.conf in my Dockerfile
>> > ( the full version at https://stackoverflow.com/a/78243530/165071 ),
>> > when loading a 28 GByte large europe-latest.osm.pbf
>>
>> Not specific conf file improvements, but for an initial data load
>> have you done things like turning off fsync(), deferring index
>> creation until after the data load finishes, and that kind of thing?
>>
>
> I will try the following commands in my Dockerfile then
> and later report back on any improvements:
>
> RUN set -eux && \
>     pg_ctl init && \
>     echo "shared_buffers = 1GB"                >> $PGDATA/postgresql.conf && \
>     echo "work_mem = 50MB"                     >> $PGDATA/postgresql.conf && \
>     echo "maintenance_work_mem = 10GB"         >> $PGDATA/postgresql.conf && \
>     echo "autovacuum_work_mem = 2GB"           >> $PGDATA/postgresql.conf && \
>     echo "wal_level = minimal"                 >> $PGDATA/postgresql.conf && \
>     echo "checkpoint_timeout = 60min"          >> $PGDATA/postgresql.conf && \
>     echo "max_wal_size = 10GB"                 >> $PGDATA/postgresql.conf && \
>     echo "checkpoint_completion_target = 0.9"  >> $PGDATA/postgresql.conf && \
>     echo "max_wal_senders = 0"                 >> $PGDATA/postgresql.conf && \
>     echo "random_page_cost = 1.0"              >> $PGDATA/postgresql.conf && \
>     echo "password_encryption = scram-sha-256" >> $PGDATA/postgresql.conf && \
>     echo "fsync = off"                         >> $PGDATA/postgresql.conf && \
>     pg_ctl start && \
>     createuser --username=postgres $PGUSER && \
>     createdb --username=postgres --encoding=UTF8 --owner=$PGUSER $PGDATABASE && \
>     psql --username=postgres $PGDATABASE --command="ALTER USER $PGUSER WITH PASSWORD '$PGPASSWORD';" && \
>     psql --username=postgres $PGDATABASE --command='CREATE EXTENSION IF NOT EXISTS postgis;' && \
>     psql --username=postgres $PGDATABASE --command='CREATE EXTENSION IF NOT EXISTS hstore;' && \
>     osm2pgsql --username=$PGUSER --database=$PGDATABASE --create --cache=60000 --hstore --latlong /data/map.osm.pbf && \
>     rm -f /data/map.osm.pbf && \
>     pg_ctl stop && \
>     echo "fsync = on"                          >> $PGDATA/postgresql.conf && \
>     echo '# TYPE DATABASE USER ADDRESS METHOD'               >  $PGDATA/pg_hba.conf && \
>     echo "local all postgres peer"                           >> $PGDATA/pg_hba.conf && \
>     echo "local $PGDATABASE $PGUSER           scram-sha-256" >> $PGDATA/pg_hba.conf && \
>     echo "host  $PGDATABASE $PGUSER 0.0.0.0/0 scram-sha-256" >> $PGDATA/pg_hba.conf
>
> The later fsync = on will override the former, right?
>
> Best regards
> Alex
>
>
>

2 hrs sounds reasonable for Europe; it's a big place in terms of OSM data,
and osm2pgsql does a lot of processing to convert the input into geometry
objects before anything happens on the PostgreSQL side.
If you examine the --log-sql output for a small test country, you can see
what it actually does in terms of SQL.
osm2pgsql offers options to trim the output to only what you need (if you
don't want waterways, traffic features, parking places, places of worship
etc., why load them?).
Hopefully you have already found Geofabrik
(https://download.geofabrik.de/), an excellent source of OSM data extracts.
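
With the classic pgsql output, that trimming is typically done with a style
file: columns commented out in the style file are simply not loaded. A
hypothetical excerpt of an edited copy of osm2pgsql's default.style
(passed via --style=/path/to/custom.style), purely for illustration:

```
# OsmType   Tag        DataType   Flags
node,way    amenity    text       polygon
node,way    building   text       polygon
node,way    highway    text       linear
#node,way   waterway   text       linear    # commented out: waterways not loaded
```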
Rather than reloading the data from scratch each update cycle, you would be
better off loading only the changes (the .osc files); osmosis will create
the equivalent of a diff file for you.
It looks like you are already using osm2pgsql's recommended postgresql.conf
settings, so I'd be surprised if they were far off. Getting as close to
bare metal as possible, rather than virtual machines and containers, will
also help; there is a lot of I/O going on here.
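
A rough sketch of that update cycle (file names and the replication working
directory are illustrative). One caveat: applying diffs with osm2pgsql's
--append requires the database to have been created in --slim mode, which
your current --create invocation does not use:

```
# One-time import, in slim mode so later diffs can be applied
osm2pgsql --create --slim --cache=60000 --hstore --latlong /data/map.osm.pbf

# Each update cycle: have osmosis assemble a change file, then apply it
osmosis --read-replication-interval workingDirectory=/data/replication \
        --simplify-change --write-xml-change /data/changes.osc.gz
osm2pgsql --append --slim --cache=60000 --hstore --latlong /data/changes.osc.gz
```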
If you are only interested in the geography, you might consider Geofabrik's
shapefiles, available for many countries; they have already done some of
the work for you.

Apologies if you are already a long way down this route and are just asking
about the final stage of loading the osm2pgsql output into PostgreSQL, but
however well you do there, I would only expect small marginal gains.
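
As for your fsync question: yes, when a parameter appears more than once in
postgresql.conf, the last occurrence wins, so the appended "fsync = on"
takes effect at the next server start. A quick illustrative check (the temp
file here just stands in for your postgresql.conf):

```shell
# Demonstrate that the last duplicated setting in the file is the one
# that matters: append fsync = off, then fsync = on, and inspect the
# final occurrence, which is what PostgreSQL will use.
conf=$(mktemp)
echo "fsync = off" >> "$conf"
echo "fsync = on"  >> "$conf"
grep '^fsync' "$conf" | tail -n 1   # prints: fsync = on
rm -f "$conf"
```

On a running server you could confirm the effective value with
"psql -c 'SHOW fsync;'" after restarting.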
