Re: [OSM-dev] Osm2pgsql failure with low-end server
Martijn van Oosterhout klep...@gmail.com writes:

> On 8 December 2011 11:13, Nick Whitelegg nick.whitel...@solent.ac.uk wrote:
>>> 700 MB is a tiny machine, but then the Finland data set isn't that
>>> large either... it must be possible somehow ;)
>>
>> Incidentally why would it run out of memory in slim mode? I had the
>> same problem some time ago when trying to import the whole of England,
>> back in the days before the county extracts.
>
> I did some research into this a while back, and it has to do with the
> code that goes over all pending ways after the import, to deal with
> polygons. It looks like the number of pending ways is a lot larger than
> it used to be. Osm2pgsql requests all the pending ways in a single
> query, which fails on small machines.
>
> Can someone check if this is really the case, i.e. show the number of
> pending ways after a simple import?

With this command line:

  osm2pgsql --create --database GIS --slim --cache 128 great_britain.osm

I get this output:

  osm2pgsql SVN version 0.70.5
  Reading in file: great_britain.osm
  Processing: Node(32899k) Way(4022k) Relation(80855)  parse time: 3481s
  Node stats: total(32899023), max(1541436207)
  Way stats: total(4022307), max(140763013)
  Relation stats: total(80855), max(1905258)
  Going over pending ways
  processing way (1639k)
  Going over pending relations
  node cache: stored: 8047817(24.46%), storage efficiency: 47.97%, hit rate: 25.52%
  ...

This runs on a VPS with ~512MB RAM (I think that this is the guaranteed
level, but I have certainly used ~50% more at times).

--
Andrew.

--
Andrew M. Bishop  a...@gedanken.demon.co.uk  http://www.gedanken.org.uk/mapping/

___
dev mailing list
dev@openstreetmap.org
http://lists.openstreetmap.org/listinfo/dev
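Martijn's request above (show the number of pending ways after a simple import) can be answered directly from the slim-mode middle tables. A hedged sketch, assuming osm2pgsql's default `planet_osm` table prefix and the `pending` flag column that the middle-table schema of this era used to mark ways awaiting post-processing; adjust names to your own prefix:

```sql
-- Count the ways still queued for post-processing after a --slim import.
-- Table name and "pending" column are assumptions based on the default
-- osm2pgsql middle-table schema; run via psql against the import database.
SELECT count(*) AS pending_ways
FROM planet_osm_ways
WHERE pending;
```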
Re: [OSM-dev] Osm2pgsql failure with low-end server
On 8 December 2011 11:13, Nick Whitelegg nick.whitel...@solent.ac.uk wrote:
>> 700 MB is a tiny machine, but then the Finland data set isn't that
>> large either... it must be possible somehow ;)
>
> Incidentally why would it run out of memory in slim mode? I had the
> same problem some time ago when trying to import the whole of England,
> back in the days before the county extracts.

I did some research into this a while back, and it has to do with the
code that goes over all pending ways after the import, to deal with
polygons. It looks like the number of pending ways is a lot larger than
it used to be. Osm2pgsql requests all the pending ways in a single
query, which fails on small machines.

Can someone check if this is really the case, i.e. show the number of
pending ways after a simple import?

And my final question was: why do polygons need post-processing after a
normal import? I can't remember the reason... If they don't, then we can
solve the problem even more quickly.

Have a nice day,
--
Martijn van Oosterhout klep...@gmail.com http://svana.org/kleptog/
Re: [OSM-dev] Osm2pgsql failure with low-end server
Martijn,

On 12/11/2011 02:31 PM, Martijn van Oosterhout wrote:
> And my final question was: why do polygons need post-processing after
> a normal import? I can't remember the reason... If they don't, then we
> can solve the problem even more quickly.

I would neither say they *need* post-processing, nor am I the author of
the current multipolygon handling, but the reasoning is as follows:

1. I import a closed way tagged landuse=wood and create a polygon.
2. I import a closed way tagged natural=water and create a polygon.
3. I import a relation that has the first way as outer and the second
   as inner.

I now either have to remove the polygon from 1, replacing it with one
that has a hole defined by 2, or else I have to delay polygon creation
until after all relations have been read.

Bye
Frederik

--
Frederik Ramm ## eMail frede...@remote.org ## N49°00'09 E008°23'33
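Frederik's three steps can be illustrated in PostGIS terms. This is a hedged sketch of step 3, not what osm2pgsql literally executes; the table and column names (`planet_osm_polygon`, `way`) follow osm2pgsql's default output schema:

```sql
-- Replace the standalone wood polygon with one that has a hole where
-- the contained water polygon sits. Illustration only: osm2pgsql does
-- this internally during relation processing, not via this query.
SELECT ST_MakePolygon(
         ST_ExteriorRing(wood.way),          -- outer ring from step 1
         ARRAY[ST_ExteriorRing(water.way)]   -- hole from step 2
       )
FROM planet_osm_polygon AS wood,
     planet_osm_polygon AS water
WHERE wood.landuse = 'wood'
  AND water."natural" = 'water'
  AND ST_Contains(wood.way, water.way);
```

This is why the work cannot be done streaming: whether a closed way becomes a simple polygon or the outer ring of a polygon with holes is only known once every relation has been read.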
Re: [OSM-dev] Osm2pgsql failure with low-end server
Hi,

On 12/08/2011 06:57 AM, Jukka Rahkonen wrote:
> processing way (68k) at 0.46k/s
> WARNING: terminating connection because of crash of another server process
> DETAIL: The postmaster has commanded this server process to roll back
> the current transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.

Does this correspond to some kind of OOM killer message in your dmesg?

Bye
Frederik
Re: [OSM-dev] Osm2pgsql failure with low-end server
Frederik Ramm wrote:
> On 12/08/2011 06:57 AM, Jukka Rahkonen wrote:
>> processing way (68k) at 0.46k/s
>> WARNING: terminating connection because of crash of another server process
>> DETAIL: The postmaster has commanded this server process to roll back
>> the current transaction and exit, because another server process
>> exited abnormally and possibly corrupted shared memory.
>
> Does this correspond to some kind of OOM killer message in your dmesg?

Sorry, I am just a poor end user and a total beginner with Linux. But
the command dmesg really does show, on the last line of the listing:

  Out of memory: kill process 26675 (postgres) score 16795 or a child.

-Jukka-
Re: [OSM-dev] Osm2pgsql failure with low-end server
Hi,

On 12/08/2011 09:54 AM, Jukka Rahkonen wrote:
>>> processing way (68k) at 0.46k/s
>>> WARNING: terminating connection because of crash of another server process
>>> DETAIL: The postmaster has commanded this server process to roll back
>>> the current transaction and exit, because another server process
>>> exited abnormally and possibly corrupted shared memory.
>>
>> Does this correspond to some kind of OOM killer message in your dmesg?
>
> Sorry, I am just a poor end user and a total beginner with Linux. But
> the command dmesg really does show, on the last line of the listing:
>
>   Out of memory: kill process 26675 (postgres) score 16795 or a child.

OK, so the kernel has killed the postgres process due to a memory
problem.

You can alleviate that by adding a little swap space to your system
(which I guess you might not have, or have too little of). That means,
in poor end user terms, that instead of killing a process, your system
will just become slower when it reaches the memory limit.

It might also be possible to trim the postgres config to use less
memory, and/or make sure that you don't have unnecessary,
memory-consuming programs running. Maybe you can stop some things for
the duration of the import. The command

  ps auxw --sort vsz

should list the currently running processes, with the largest memory
consumers last.

700 MB is a tiny machine, but then the Finland data set isn't that
large either... it must be possible somehow ;)

Bye
Frederik
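Frederik's suggestion to trim the postgres config can be made concrete with a few postgresql.conf settings. The values below are illustrative guesses for a ~700 MB machine, not recommendations from the thread; they need a server restart (or reload, depending on the setting) to take effect:

```
# postgresql.conf fragment for a low-memory (~700 MB) import box.
# Values are illustrative guesses, not tuned recommendations.
shared_buffers = 64MB          # default may be more than a small VPS can spare
work_mem = 8MB                 # per-sort/per-hash allocation; keep small
maintenance_work_mem = 64MB    # used by CREATE INDEX during the import
max_connections = 20           # fewer backends, less per-connection overhead
```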
Re: [OSM-dev] Osm2pgsql failure with low-end server
> 700 MB is a tiny machine, but then the Finland data set isn't that
> large either... it must be possible somehow ;)

Incidentally, why would it run out of memory in slim mode? I had the
same problem some time ago when trying to import the whole of England,
back in the days before the county extracts.

Nick
[OSM-dev] Osm2pgsql failure with low-end server
Hi,

I had a new try at importing the Finnish OSM excerpt with a brand new
osm2pgsql into PostgreSQL 8.4 on a very modest Linux server (700 MB of
memory). The import seems to be much faster in the beginning, but it
still breaks at the same place as with the older osm2pgsql. The error
seems to be some general server error. All that can be done in this
case is to start again. Sooner or later it goes through, but with this
server I prefer to create a PG dump file on another computer and load
it with pg_restore. That never fails.

  osm2pgsql -d gis -p osm2 -P 5432 -s -k -E 3067 --drop finland.osm.bz2
  osm2pgsql SVN version 0.80.0 (32bit id space)

  Using projection SRS 3067 (EPSG:3067)
  Setting up table: osm2_point
  NOTICE: table osm2_point does not exist, skipping
  NOTICE: table osm2_point_tmp does not exist, skipping
  Setting up table: osm2_line
  NOTICE: table osm2_line does not exist, skipping
  NOTICE: table osm2_line_tmp does not exist, skipping
  Setting up table: osm2_polygon
  NOTICE: table osm2_polygon does not exist, skipping
  NOTICE: table osm2_polygon_tmp does not exist, skipping
  Setting up table: osm2_roads
  NOTICE: table osm2_roads does not exist, skipping
  NOTICE: table osm2_roads_tmp does not exist, skipping
  Allocating memory for dense node cache
  Allocating dense node cache in block sized chunks
  Node-cache: cache=800MB, maxblocks=0*102401, allocation method=8192
  Mid: pgsql, scale=100
  Setting up table: osm2_nodes
  NOTICE: table osm2_nodes does not exist, skipping
  NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index osm2_nodes_pkey for table osm2_nodes
  Setting up table: osm2_ways
  NOTICE: table osm2_ways does not exist, skipping
  NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index osm2_ways_pkey for table osm2_ways
  Setting up table: osm2_rels
  NOTICE: table osm2_rels does not exist, skipping
  NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index osm2_rels_pkey for table osm2_rels
  Reading in file: finland.osm.bz2
  Processing: Node(5020k 31.8k/s) Way(0k 0.00k/s) Relation(0 0.00/s)
  WARNING: Found Out of order node 727762854 (34265137,934) - this will impact the cache efficiency
  Processing: Node(10558k 30.0k/s) Way(887k 1.97k/s) Relation(10548 32.36/s)  parse time: 1129s
  Node stats: total(10558318), max(1534413174) in 352s
  Way stats: total(887645), max(140040529) in 451s
  Relation stats: total(10548), max(1889635) in 326s
  Committing transaction for osm2_point
  Committing transaction for osm2_line
  Committing transaction for osm2_polygon
  Committing transaction for osm2_roads
  Going over pending ways
  Using 1 helper-processes
  processing way (68k) at 0.46k/s
  WARNING: terminating connection because of crash of another server process
  DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
  HINT: In a moment you should be able to reconnect to the database and repeat your command.
  way_done failed: server closed the connection unexpectedly
  This probably means the server terminated abnormally before or while processing the request.
  (7) Arguments were: 32041909,
  Error occurred, cleaning up

-Jukka Rahkonen-
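One thing visible in the log above: the command passed no cache option, so osm2pgsql allocated its 800 MB default node cache (`Node-cache: cache=800MB`) on a machine with only 700 MB of RAM, leaving nothing for PostgreSQL. A minimal sketch of the fix, assuming the cache is capped with osm2pgsql's -C option; the 200 MB figure is a guess for this machine, not a tested setting:

```shell
# The failing invocation used the 800 MB default node cache. Capping it
# with -C leaves headroom for PostgreSQL (value is a guess, not tested):
#
#   osm2pgsql -d gis -p osm2 -P 5432 -s -k -E 3067 --drop -C 200 finland.osm.bz2

# Sanity check: the chosen cache should be well under physical RAM.
RAM_MB=700
CACHE_MB=200
if [ "$CACHE_MB" -lt $((RAM_MB / 2)) ]; then
  echo "cache fits"
else
  echo "cache too large"
fi
```

With the cache small enough, the import should at least reach the pending-ways phase without the kernel's OOM killer taking out a postgres backend.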