Re: [OSM-dev-fr] Sharing DEMs (was: Hosting the HOT rendering?)

2013-06-27 Thread sly (sylvain letuffe)
I'm forwarding this to dev-fr to gather more opinions.

On Thursday 27 June 2013, Yohan Boniface wrote:

 For the hillshade layer, my general idea for now is to use a .vrt that
 references the processed tifs; apparently that's what people do.

I'm not working on the whole world, just on Europe (780 square degrees), and
I opted for one single tif file (9 GB).

Not having experimented much with the vrt-plus-large-directory-tree approach,
I couldn't say whether it's better or worse. Except, no doubt, that my
technique adds one monstrous gdal_merge.py operation to produce a titanic
file (a sketch follows below).
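
For reference, a hypothetical sketch of that merge step (tile names and
creation options are illustrative, not the exact setup):

  # merge all downloaded SRTM tiles into a single Europe-wide GeoTIFF
  gdal_merge.py -o europe.tif -of GTiff -co TILED=YES -co COMPRESS=DEFLATE srtm_*.tif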

 In theory, it seems I should also be able to build a
 .vrt, but for now I'm failing successfully ;)

I opted for the technique of loading the contour lines into the PostgreSQL
database:
http://wiki.openstreetmap.org/wiki/Contours
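
A minimal sketch of that approach, close to what the wiki page describes
(SRID, nodata value, and file names are assumptions):

  # generate 25 m contour lines from one DEM tile, then load them into PostgreSQL
  gdal_contour -i 25 -snodata -32768 -a height srtm_38_03.tif contours.shp
  shp2pgsql -s 4326 -I contours.shp contours | psql -d gis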

Again, I couldn't say whether that's better or worse than storing and
accessing shp files.


Full message:
On Thursday 27 June 2013, Yohan Boniface wrote:
 Speaking of DEMs, I'm currently thinking about how to scale the DEM
 for the HOT rendering, and I still have quite a few open questions
 about the best strategy.
 
 Here is roughly where my thinking stands; if anyone has opinions, I'm
 interested.
 
 First, a bit of context: I'm working on a TileMill project [1], so in
 Carto.
 For the DEM visualisations, I settled on two layers (sketched just below):
 - a hillshade layer with the light source at 80° altitude
 - a contour-line layer at 25 m intervals
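
A minimal sketch of those two layers with GDAL (file names are
illustrative; -alt sets the light-source altitude, -i the contour interval):

  # hillshade with the light source 80° above the horizon
  gdaldem hillshade srtm_38_03.tif hillshade.tif -alt 80 -compute_edges
  # contour lines every 25 m, with the elevation in a "height" attribute
  gdal_contour -i 25 -a height srtm_38_03.tif contours.shp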
 
 So far I've done everything from a single 5x5 degree CGIAR tile at 90 m
 resolution, available on their site [2]. So it's easy: I have a single
 small file to manage per layer, and each processing step takes 10 seconds.
 
 But of course, once the idea is to take this rendering worldwide, it
 won't be that simple.
 
 For the hillshade layer, my general idea for now is to use a .vrt that
 references the processed tifs; apparently that's what people do.
 However, I don't know the optimal ratio between the number of tifs and
 their size. There are 820 90 m tiles on the CGIAR server, and 820 files
 seems, at a rough guess, not exactly optimal.
 On the other hand, MapBox seem to say that tiles between 4096x4096 and
 16384x16384 gave them the best results [3], and the CGIAR tiles are
 6000x6000. So one option is to merge them in groups of 4 or 6 (a sketch
 follows below).
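
A hedged sketch of that merge-then-reference idea (tile names and the
grouping are hypothetical):

  # merge four adjacent 6000x6000 CGIAR tiles into one 12000x12000 block...
  gdal_merge.py -o block_01.tif srtm_37_02.tif srtm_38_02.tif srtm_37_03.tif srtm_38_03.tif
  # ...then reference all the merged blocks from a single virtual raster
  gdalbuildvrt world.vrt block_*.tif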
 
 For the contour lines, I had set out to merge the shps into one big
 shp, but for now ogr2ogr won't accept my contour-line shps. In theory,
 it seems I should also be able to build a .vrt, but for now I'm
 failing successfully ;)
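
When ogr2ogr cooperates, the usual append pattern looks roughly like this
(file and layer names hypothetical):

  # create the target shapefile from the first tile...
  ogr2ogr merged.shp srtm_37_03_contours.shp
  # ...then append each remaining tile into the same layer
  ogr2ogr -update -append merged.shp srtm_38_03_contours.shp -nln merged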
 
 That's where my thinking stands. I'm getting to the point of deciding
 to start by serving the style without the DEMs, or with the DEMs only
 in the places that are strategic for HOT, adding more as requests come
 in. That way the load ramps up little by little, and I can take the
 time to weigh the merits of the different options.
 
 If you have any tips, experience, or feedback, I'm interested. :)
 
 I'll probably ping the English-speaking folks on serving-tiles@ once
 I have a somewhat clearer picture.
 
 In the meantime I'll keep reading the literature of the Internets on
 the subject!
 
 Yohan
 
 [1] https://github.com/hotosm/HDM-CartoCSS
 [2] http://srtm.csi.cgiar.org/SELECTION/inputCoord.asp
 [3] 



-- 
sly, DWG member since 11/2012
Coordinator of the [ga] group
http://wiki.openstreetmap.org/wiki/User:Sletuffe



[OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

2013-06-27 Thread Akos Maroy

Hi,

I'd like to inquire about the estimated disk size needed to import the 
planet OSM file into PostGIS.



What I want to do is import the planet OSM file into a PostGIS database that
lives on a 512GB SSD (formatted and reported size: 459G, 480720592 1K blocks).
I did the import using osm2pgsql:


/usr/local/bin/osm2pgsql -d osm_world -s -C 5800 --hstore-all -K -v -G 
-m  planet-130620.osm.bz2



The end of the import log is the following:

Completed planet_osm_roads
Creating osm_id index on  planet_osm_point
Creating indexes on  planet_osm_point finished
All indexes on  planet_osm_point created  in 10171s
Completed planet_osm_point
CREATE INDEX planet_osm_ways_nodes ON planet_osm_ways USING gin (nodes)  
WITH (FASTUPDATE=OFF);
 failed: ERROR:  could not extend file base/1602600/4948340.12: No 
space left on device

HINT:  Check free disk space.

Error occurred, cleaning up




Of course I have more space on traditional HDDs, but the main point is to 
have the I/O-intensive parts of the PostGIS database on the SSD. The OSM 
file to be imported is located on a traditional HDD (not on the SSD), 
and the osm2pgsql command is also run from a working directory on 
the HDD. The /tmp directory is not on the SSD either.


The SSD is formatted as ext4.

Would the planet OSM file in general fit on such a disk?

Are there optimization possibilities to decrease the required disk size 
for the import? Maybe:


 * some temporary disk space is used during the import that could be
   moved from the SSD to the HDD?
 * some non-I/O-critical parts of the database files could be relocated
   to the HDD (and symlinked)? (see the tablespace sketch after this list)
 * some parts of the database could be omitted (like, I don't need the
   history, etc.)
 * maybe some tune2fs magic?
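
On the relocation idea, a hypothetical sketch using a PostgreSQL tablespace
instead of symlinks (paths illustrative; recent osm2pgsql versions also
expose --tablespace-* options for exactly this):

  # create an HDD-backed tablespace and keep the bulky slim tables off the SSD
  sudo mkdir -p /mnt/hdd/pg_tblspc && sudo chown postgres /mnt/hdd/pg_tblspc
  psql -d osm_world -c "CREATE TABLESPACE hdd LOCATION '/mnt/hdd/pg_tblspc'"
  osm2pgsql -d osm_world -s -C 5800 --tablespace-slim-data hdd \
      --tablespace-slim-index hdd planet-130620.osm.bz2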


Any pointers, ideas, etc. welcome,


Akos



Re: [OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

2013-06-27 Thread Frederik Ramm

Hi,

On 06/27/2013 09:33 AM, Akos Maroy wrote:

I'd like to inquire about the estimated disk size needed to import the
planet OSM file into PostGIS.


320 GB on a machine I am running.


/usr/local/bin/osm2pgsql -d osm_world -s -C 5800 --hstore-all -K -v -G
-m  planet-130620.osm.bz2


I use --flat-nodes which saves more than 50 GB and is quicker. I don't 
use -K, and I don't use --hstore-all either; both will certainly blow up 
the space needed. I recommend using a shape file for coastlines like 
everyone else does, and to have a look at --hstore-match-only (you 
really don't need stuff *twice* in your database) and use a stripped 
style.xml that ensures you don't store tons and tons of "note" and 
"source" tags and other import side products. You don't want to burden 
your SSD with tags like gnis:Class, NHD:FType, tiger:PCICBSA, or 
canvec:UUID ;)
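
A sketch pulling those suggestions together (the flat-nodes path and the
style file name are illustrative, not Frederik's exact setup):

  osm2pgsql -d osm_world -s -C 5800 \
      --flat-nodes /ssd/planet.flat.nodes \
      --hstore --hstore-match-only \
      --style stripped.style \
      planet-130620.osm.bz2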


Bye
Frederik

--
Frederik Ramm  ##  eMail frede...@remote.org  ##  N49°00'09 E008°23'33



Re: [OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

2013-06-27 Thread Paul Norman
 From: Akos Maroy [mailto:a...@maroy.hu] 
 Subject: [OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?
 
 Hi,
 
 I'd like to inquire about the estimated disk size needed to import the
planet OSM file into PostGIS.

For osm2pgsql, my recollection of the final size from my last tests is about
240GB for the database plus 17GB for the flat-nodes file.
 
 What I want to do is import the planet OSM file into a PostGIS database that
lives on a 512GB SSD (formatted and reported size: 459G, 480720592 1K blocks).
I did the import using osm2pgsql:
 
 /usr/local/bin/osm2pgsql -d osm_world -s -C 5800 --hstore-all -K -v -G -m
planet-130620.osm.bz2
 
 The end of the import log is the following:
 
 ...
  failed: ERROR:  could not extend file base/1602600/4948340.12: No space
left on device
 HINT:  Check free disk space.
 
 Error occurred, cleaning up

There are a couple of ways to reduce the disk space. The first is to use
--flat-nodes. This turns what was an 80GB+ nodes table for a full planet into
a 17GB flat file.

One other optimization is that if you're not planning on doing updates and
don't need the slim tables, you can use --drop to get rid of them.

By default osm2pgsql does indexing and clustering of the tables in parallel.
This is fastest, but it results in a big spike of disk usage while
rearranging and indexing as it happens on all the tables at once. I believe
--disable-parallel-indexing will fix this.
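
A hedged sketch of an import combining these options (flat-nodes path
hypothetical; --drop only makes sense if you won't apply diff updates later):

  osm2pgsql -d osm_world -s -C 5800 \
      --flat-nodes /ssd/planet.flat.nodes \
      --drop --disable-parallel-indexing \
      planet-130620.osm.pbf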

I am quite surprised you ran into problems on a 512GB SSD. I've imported
recent planets on smaller volumes. On the other hand, I don't know anyone
who's done a full planet import without --flat-nodes lately and that
probably helps lots.

Two other tips for the next time you try: use the .osm.pbf file instead of
.osm.bz2, and note that a new planet file was generated about two hours ago.

Another general tip is that if your planet file is more than a day or so
old, use osmupdate (https://wiki.openstreetmap.org/wiki/Osmupdate) to update
your planet file before importing. It only takes about an hour even if it's
a week old, and that's way faster than importing diffs afterwards.
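
The basic osmupdate invocation is a single command (file names illustrative):

  # fetch and apply all diffs published since the planet file's timestamp
  osmupdate planet-130620.osm.pbf planet-current.osm.pbf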

Because osm2pgsql has *so* many options, the help text will now suggest a
reasonable command for importing data; see
https://github.com/openstreetmap/osm2pgsql/commit/34a30092d6






Re: [OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

2013-06-27 Thread Akos Maroy

Frederik,




I'd like to inquire about the estimated disk size needed to import the
planet OSM file into PostGIS.


320 GB on a machine I am running.

Reassuring to hear :)



/usr/local/bin/osm2pgsql -d osm_world -s -C 5800 --hstore-all -K -v -G
-m  planet-130620.osm.bz2


I use --flat-nodes which saves more than 50 GB and is quicker. I don't 
use -K, and I don't use --hstore-all either; both will certainly blow 
up the space needed. I recommend using a shape file for coastlines 
like everyone else does, and to have a look at --hstore-match-only 

Would this be --hstore --hstore-match-only?
(you really don't need stuff *twice* in your database) and use a 
stripped style.xml that ensures you don't store tons and tons of 
"note" and "source" tags and other import side products. You don't 
want to burden your SSD with tags like gnis:Class, NHD:FType, 
tiger:PCICBSA, or canvec:UUID ;)

Indeed.

Thanks for the ideas.


Akos




Re: [OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

2013-06-27 Thread Akos Maroy

Paul,



There are a couple of ways to reduce the disk space. The first is to use
--flat-nodes. This turns what was an 80GB+ nodes table for a full planet into
a 17GB flat file.

One other optimization is that if you're not planning on doing updates and
don't need the slim tables, you can use --drop to get rid of them.

By default osm2pgsql does indexing and clustering of the tables in parallel.
This is fastest, but it results in a big spike of disk usage while
rearranging and indexing as it happens on all the tables at once. I believe
--disable-parallel-indexing will fix this.

Thanks, trying --flat-nodes flat-nodes --hstore --hstore-match-only
--disable-parallel-indexing now


I am quite surprised you ran into problems on a 512GB SSD. I've imported
recent planets on smaller volumes. On the other hand, I don't know anyone
who's done a full planet import without --flat-nodes lately and that
probably helps lots.

Two other tips for the next time you try: use the .osm.pbf file instead of
.osm.bz2, and note that a new planet file was generated about two hours ago.

The .bz2 file seems to work fine with bzcat.


Another general tip is that if your planet file is more than a day or so
old, use osmupdate (https://wiki.openstreetmap.org/wiki/Osmupdate) to update
your planet file before importing. It only takes about an hour even if it's
a week old, and that's way faster than importing diffs afterwards.

Will look into it, thanks.


Akos




Re: [OSM-dev] disk size for planet osm import into PostGIS (on an SSD)?

2013-06-27 Thread Sven Geggus
Frederik Ramm frede...@remote.org wrote:

 and use a stripped style.xml 

I assume you meant osm2pgsql style.

 that ensures you don't store tons and tons of "note" and "source" tags and
 other import side products. You don't want to burden your SSD with tags
 like gnis:Class, NHD:FType, tiger:PCICBSA, or canvec:UUID ;)

Here is the one we use on the german tileserver:
http://svn.openstreetmap.org/applications/rendering/mapnik-german/views/default.style

Sven

-- 
The Internet is not a lawless space, but neither is the Internet a space
without civil rights. (Wolfgang Wieland, Bündnis 90/Die Grünen)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web



Re: [josm-dev] Tabular edit of name tags

2013-06-27 Thread Hans Schmidt
On 27.06.2013 04:04, Nicolás Alvarez wrote:
 Right, but Vladimir Putin (Владимир Путин) with the parentheses and
 all is *not* his name. That is what people are objecting: putting both
 versions of the street name in the name tag just for the sake of
 rendering.

I don’t like it either, but in most (at least Asian) countries this is
regularly done like this. However, I am not talking about the name=... tag,
but about the name:xx=... tags. Nobody expects to put 10 languages into
the plain name tag; that is what the name:xx tags are for.

But I guess this issue has deviated too much from my original mail.