Re: Please recommend postgresql.conf improvements for osm2pgsql loading Europe
Thank you for the insights
Re: Please recommend postgresql.conf improvements for osm2pgsql loading Europe
On Sat, 30 Mar 2024, 10:04 Alexander Farber wrote:

> Thank you, Justin -
>
> I will try the following commands in my Dockerfile then
> and later report back on any improvements:
>
> [quoted Dockerfile commands trimmed; see the full message below]
>
> The later fsync = on will override the former, right?
>
> Best regards
> Alex

2 hrs sounds reasonable for Europe: it's a big place in terms of OSM data, and osm2pgsql is doing processing to convert the input into geometry objects before doing anything on the PostgreSQL side. If you examine the --log-sql output for a small test country, you can see what it does in terms of the PostgreSQL work.

osm2pgsql gives you options to trim the output to only what you need (so if you don't want waterways, traffic features, parking places or places of worship etc., why load them?).

Hopefully you have found the excellent Geofabrik downloads ( https://download.geofabrik.de/ ) as a source for OSM data. Rather than loading this data afresh each update cycle, you would be better off loading only the changes, i.e. the .osc files; osmosis will create the equivalent of a diff file for you.

It looks like you are already using osm2pgsql's recommended postgresql.conf settings, so I'd be surprised if these were way off. Getting as close to tin as possible, rather than virtual machines and containers, will also help, as there is a lot of I/O going on here.

If you are only interested in the geography, you might consider Geofabrik's shapefiles, available for many countries; they have already done some of the work for you.

Apologies if you are already a long way down this route and just asking about the final stage of loading the osm2pgsql output into PostgreSQL, but however well you do here I would only expect small marginal gains.
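[Editor's note] The incremental-update suggestion above could look roughly like this. This is a sketch only: it assumes the osmium command-line tool is installed, that a change file has already been downloaded from Geofabrik's update directory, and (importantly) that the initial osm2pgsql import was run with --slim, which --append requires; file names are illustrative.

```shell
# Sketch: apply an OSM change file (.osc.gz) instead of re-importing Europe.
# Assumptions: osmium tool installed, change file already downloaded, and
# the initial import used "osm2pgsql --create --slim" (required for --append).
# File names are illustrative.

# Option A: feed the change file straight into the existing database.
osm2pgsql --append --slim \
    --username=$PGUSER --database=$PGDATABASE \
    --hstore --latlong /data/changes.osc.gz

# Option B: merge the changes into the local extract first, keeping an
# up-to-date .pbf around for other tooling.
osmium apply-changes /data/map.osm.pbf /data/changes.osc.gz \
    --output=/data/map-updated.osm.pbf
```

Either way, only the changed objects are processed, rather than the whole 28 GB extract.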
Re: Please recommend postgresql.conf improvements for osm2pgsql loading Europe
On 2024-03-31 04:07, Alexander Farber wrote:

> Turning fsync = off has resulted in no noticeable build time reduction
> for my Dockerfile with OSM Europe data, but still thank you for the
> suggestion!

No worries. :)

With this import you're doing, is it something that will be repeated a lot with the exact same data set, or is this a once-off thing?

If it's something that'll be repeated a lot (maybe part of some automated process?), then it might be worth making a backup / snapshot / something of the database after the import has completed.

With a backup or snapshot in place (depends on the storage you're using), you could potentially load things from that backup / snapshot (etc) instead of having to do the import all over again each time.

Regards and best wishes,

Justin Clift
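[Editor's note] The backup idea above might look like this in shell. A sketch under assumptions: pg_dump/pg_restore from the same PostgreSQL major version, and illustrative paths; a filesystem or Docker-layer snapshot of $PGDATA would work just as well.

```shell
# One-time: after the slow osm2pgsql import has finished, take a
# custom-format dump (compressed, and restorable in parallel).
pg_dump --username=$PGUSER --format=custom \
    --file=/backups/osm_europe.dump $PGDATABASE

# Every later rebuild: restore from the dump instead of re-importing.
# --jobs restores several tables/indexes in parallel.
createdb --username=postgres --encoding=UTF8 --owner=$PGUSER $PGDATABASE
pg_restore --username=$PGUSER --dbname=$PGDATABASE \
    --jobs=4 /backups/osm_europe.dump
```

The restore skips all of osm2pgsql's geometry processing, so it is usually much faster than repeating the import.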
Re: Please recommend postgresql.conf improvements for osm2pgsql loading Europe
Turning fsync = off has resulted in no noticeable build time reduction for my Dockerfile with OSM Europe data, but still thank you for the suggestion!
Re: Please recommend postgresql.conf improvements for osm2pgsql loading Europe
Thank you, Justin -

On Sat, Mar 30, 2024 at 4:33 AM Justin Clift wrote:
> On 2024-03-30 05:53, Alexander Farber wrote:
> > I use the following postgresql.conf in my Dockerfile
> > ( the full version at https://stackoverflow.com/a/78243530/165071 ),
> > when loading a 28 GByte large europe-latest.osm.pbf
>
> Not specific conf file improvements, but for an initial data load
> have you done things like turning off fsync(), deferring index
> creation until after the data load finishes, and that kind of thing?

I will try the following commands in my Dockerfile then
and later report back on any improvements:

RUN set -eux && \
    pg_ctl init && \
    echo "shared_buffers = 1GB" >> $PGDATA/postgresql.conf && \
    echo "work_mem = 50MB" >> $PGDATA/postgresql.conf && \
    echo "maintenance_work_mem = 10GB" >> $PGDATA/postgresql.conf && \
    echo "autovacuum_work_mem = 2GB" >> $PGDATA/postgresql.conf && \
    echo "wal_level = minimal" >> $PGDATA/postgresql.conf && \
    echo "checkpoint_timeout = 60min" >> $PGDATA/postgresql.conf && \
    echo "max_wal_size = 10GB" >> $PGDATA/postgresql.conf && \
    echo "checkpoint_completion_target = 0.9" >> $PGDATA/postgresql.conf && \
    echo "max_wal_senders = 0" >> $PGDATA/postgresql.conf && \
    echo "random_page_cost = 1.0" >> $PGDATA/postgresql.conf && \
    echo "password_encryption = scram-sha-256" >> $PGDATA/postgresql.conf && \
    echo "fsync = off" >> $PGDATA/postgresql.conf && \
    pg_ctl start && \
    createuser --username=postgres $PGUSER && \
    createdb --username=postgres --encoding=UTF8 --owner=$PGUSER $PGDATABASE && \
    psql --username=postgres $PGDATABASE --command="ALTER USER $PGUSER WITH PASSWORD '$PGPASSWORD';" && \
    psql --username=postgres $PGDATABASE --command='CREATE EXTENSION IF NOT EXISTS postgis;' && \
    psql --username=postgres $PGDATABASE --command='CREATE EXTENSION IF NOT EXISTS hstore;' && \
    osm2pgsql --username=$PGUSER --database=$PGDATABASE --create --cache=6 --hstore --latlong /data/map.osm.pbf && \
    rm -f /data/map.osm.pbf && \
    pg_ctl stop && \
    echo "fsync = on" >> $PGDATA/postgresql.conf && \
    echo '# TYPE  DATABASE     USER      ADDRESS    METHOD' > $PGDATA/pg_hba.conf && \
    echo "local   all          postgres             peer" >> $PGDATA/pg_hba.conf && \
    echo "local   $PGDATABASE  $PGUSER              scram-sha-256" >> $PGDATA/pg_hba.conf && \
    echo "host    $PGDATABASE  $PGUSER   0.0.0.0/0  scram-sha-256" >> $PGDATA/pg_hba.conf

The later fsync = on will override the former, right?

Best regards
Alex
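[Editor's note] On the override question: yes. When the same parameter appears more than once in postgresql.conf, PostgreSQL keeps the last occurrence, so the later fsync = on overrides the earlier fsync = off (confirm after a restart with SHOW fsync; in psql). The last-one-wins rule can be illustrated without a running server; the awk one-liner below only mimics what the real server does when reading its config:

```shell
# Build a config file with a duplicated setting, as the Dockerfile does.
cat > /tmp/postgresql_demo.conf <<'EOF'
shared_buffers = 1GB
fsync = off
fsync = on
EOF

# PostgreSQL keeps the LAST occurrence of a duplicated parameter.
# This toy awk parser applies the same rule for a single key:
effective=$(awk -F' *= *' '$1 == "fsync" { v = $2 } END { print v }' \
    /tmp/postgresql_demo.conf)
echo "effective fsync = $effective"   # prints: effective fsync = on
```

Note that appending settings this way works, but it leaves confusing duplicate lines behind; overwriting the line (or using ALTER SYSTEM) keeps the file readable.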
Re: Please recommend postgresql.conf improvements for osm2pgsql loading Europe
On 2024-03-30 05:53, Alexander Farber wrote:

> I use the following postgresql.conf in my Dockerfile
> ( the full version at https://stackoverflow.com/a/78243530/165071 ),
> when loading a 28 GByte large europe-latest.osm.pbf
>
> Is anybody please able to spot any improvements I could apply to the
> postgresql.conf config values at the top of my mail, that could reduce
> the loading time of almost 2 hours?

Not specific conf file improvements, but for an initial data load have you done things like turning off fsync(), deferring index creation until after the data load finishes, and that kind of thing?

You don't want fsync() off when you're using the database in production, but for long data load scenarios it seems like it'd be a decent fit.

With .pbf files, from skimming over how they're described here:

  https://wiki.openstreetmap.org/wiki/PBF_Format

... they don't seem to be optimised for loading into a database. (?) It kind of looks like they'd be stored as individual records, which probably means they'd be getting imported as individual INSERT statements rather than something that's optimised for bulk loading. :(

Regards and best wishes,

Justin Clift
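[Editor's note] For what it's worth, osm2pgsql streams rows into PostgreSQL with the COPY protocol rather than issuing per-row INSERTs, so the row-by-row concern speculated about above should not apply. As a generic illustration of the difference (the table, columns, and file name below are hypothetical, and a reachable $PGDATABASE is assumed):

```shell
# Hypothetical table: points(id bigint, lat double precision, lon double precision)

# Slow for bulk data: one round-trip and one parse/plan per row.
psql $PGDATABASE --command="INSERT INTO points VALUES (1, 52.52, 13.40);"

# Fast: a single COPY statement streams many rows from a file.
psql $PGDATABASE --command="\copy points FROM 'points.tsv'"
```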