Re: [OSM-talk] OSM Postgres table sizes

2010-06-25 Thread Chris Jones
On 24/06/10 19:44, John Smith wrote:
> On 25 June 2010 04:37, Richard Weait rich...@weait.com wrote:
>> I'm not sure I understand your question.
>
> Over time, the overhead increases, not just the amount of data.
>
>> lose some large tables.  But then you lose the ability to update
>> unless you do a re-import.
>
> That's my question, how to eliminate overhead in the database without
> re-importing.

Any overhead is typically a percentage of the stored data, for indexes
and such; you can't just magically get rid of it!

What you can do is vacuum the database at intervals to reorganise the
on-disk storage, but I would suspect that's already happening.
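
For what it's worth, a rough sketch of how to check that, assuming a
default osm2pgsql import into a database named "gis" (the database and
table names here are placeholders):

# when was each table last vacuumed, manually or by autovacuum?
psql -d gis -c "SELECT relname, last_vacuum, last_autovacuum FROM pg_stat_user_tables;"

# reclaim dead-row space for reuse and refresh planner statistics, in place
psql -d gis -c "VACUUM ANALYZE planet_osm_ways;"

Note that a plain VACUUM only marks dead space for reuse inside the
existing files; it generally does not shrink them on disk.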

--
Chris Jones, SUCS Admin
http://sucs.org



Re: [OSM-talk] OSM Postgres table sizes

2010-06-25 Thread John Smith
On 25 June 2010 20:23, Chris Jones roller...@sucs.org wrote:
> Any overhead is typically a percentage of the stored data, for indexes
> and such; you can't just magically get rid of it!

I was under the impression that it was incremental data from minutely updates...



Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)

2010-06-24 Thread Phil! Gold
* Juan Lucas Domínguez Rubio juan_lucas...@yahoo.com [2010-06-24 01:34 -0700]:
> Another question: after exporting the whole planet (recently) to
> Postgres, what is the size of the largest table created (which I presume
> will take up 80% of the whole DB)?

I can't speak for the whole planet.osm file (so this might be useless),
but I have (roughly) an extract of the United States.  The largest table,
planet_osm_ways, is 50 GB.  The next-largest table, planet_osm_nodes, is
21 GB.  After that is planet_osm_line at 8 GB.
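
For anyone who wants the same numbers for their own database, a sketch
using Postgres's built-in size functions (the database name is a
placeholder; pg_total_relation_size counts indexes and TOAST data too):

# list tables largest-first, indexes included
psql -d gis -c "
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_statio_user_tables
 ORDER BY pg_total_relation_size(relid) DESC;"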

-- 
...computer contrarian of the first order... / http://aperiodic.net/phil/
PGP: 026A27F2  print: D200 5BDB FC4B B24A 9248  9F7A 4322 2D22 026A 27F2
--- --
Last night I met upon the stair
A little man who wasn't there.
He wasn't there again today.
I think he's from the NSA!
 --- --



Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)

2010-06-24 Thread Richard Weait
On Thu, Jun 24, 2010 at 4:34 AM, Juan Lucas Domínguez Rubio
juan_lucas...@yahoo.com wrote:

> Hello, thanks.
>
> Solved. I think the problem was that I was downloading the file to a remote
> disk (R: mapped to \\lanserver\data)
>
> Another question: after exporting the whole planet (recently) to Postgres,
> what is the size of the largest table created (which I presume will take up
> 80% of the whole DB)?

based on my planet and minutely mapnik:

8 GB polygon
21 GB line
2 GB point
43 GB nodes
3 GB roads
50 GB ways
4 GB rels

overall disk use ~ 130 GB and growing about 2.5 GB/week at the moment.
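
For anyone replicating this setup, a sketch of how such a
minutely-updated database is typically kept current, assuming osmosis
replication and a default osm2pgsql "gis" database (paths and file
names are placeholders):

# fetch the minutely diffs accumulated since the last run
osmosis --read-replication-interval workingDirectory=replication \
        --simplify-change --write-xml-change changes.osc.gz

# apply them to the existing tables; --append only works on a --slim import
osm2pgsql --append --slim -d gis changes.osc.gz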



Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)

2010-06-24 Thread John Smith
On 25 June 2010 00:28, Richard Weait rich...@weait.com wrote:
> overall disk use ~ 130 GB and growing about 2.5 GB/week at the moment.

Is there a way to reduce this overhead without re-importing?



Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)

2010-06-24 Thread Juan Lucas Domínguez Rubio


From: Richard Weait rich...@weait.com
Subject: Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
To: talk@openstreetmap.org
Date: Thursday, June 24, 2010, 4:28 PM

> On Thu, Jun 24, 2010 at 4:34 AM, Juan Lucas Domínguez Rubio
> juan_lucas...@yahoo.com wrote:
>
>> Hello, thanks.
>>
>> Solved. I think the problem was that I was downloading the file to a remote
>> disk (R: mapped to \\lanserver\data)
>>
>> Another question: after exporting the whole planet (recently) to Postgres,
>> what is the size of the largest table created (which I presume will take up
>> 80% of the whole DB)?
>
> based on my planet and minutely mapnik:
>
> 8 GB polygon
> 21 GB line
> 2 GB point
> 43 GB nodes
> 3 GB roads
> 50 GB ways
> 4 GB rels
>
> overall disk use ~ 130 GB and growing about 2.5 GB/week at the moment.


Hello, thanks.
That's much more than I expected. With a small sample, I had measured
roughly a 1:3 ratio between the .osm file size and the resulting table
size, so I estimated ~50 GB for the whole DB.

Regards,
Juan Lucas







Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)

2010-06-24 Thread Richard Weait
On Thu, Jun 24, 2010 at 10:39 AM, John Smith deltafoxtrot...@gmail.com wrote:
> On 25 June 2010 00:28, Richard Weait rich...@weait.com wrote:
>> overall disk use ~ 130 GB and growing about 2.5 GB/week at the moment.
>
> Is there a way to reduce this overhead without re-importing?

I'm not sure I understand your question.

You can import a bounding box or extract and have smaller tables.

You can import without --slim, if you have the hardware for it, and
lose some large tables.  But then you lose the ability to update
unless you do a re-import.
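
A sketch of those two options, with placeholder file and database names
(the bounding box is minlon,minlat,maxlon,maxlat, per osm2pgsql's
--bbox flag):

# slim import restricted to a bounding box: smaller tables, still updatable
osm2pgsql --slim --bbox=-5.5,49.5,2.5,56.0 -d gis planet-latest.osm.bz2

# non-slim import: skips the big intermediate node/way/relation tables,
# needs a lot of RAM, and cannot take diff updates afterwards
osm2pgsql -d gis planet-latest.osm.bz2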

Other alternatives?



Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)

2010-06-24 Thread John Smith
On 25 June 2010 04:37, Richard Weait rich...@weait.com wrote:
> I'm not sure I understand your question.

Over time, the overhead increases, not just the amount of data.

> You can import a bounding box or extract and have smaller tables.

> You can import without --slim, if you have the hardware for it, and

I didn't mean without the --slim option.

> lose some large tables.  But then you lose the ability to update
> unless you do a re-import.

That's my question, how to eliminate overhead in the database without
re-importing.
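
If the extra space really is dead-tuple bloat rather than new data, it
can be reclaimed in place, at the cost of long table locks while it
runs; a sketch against the default osm2pgsql table names:

# rewrite the table, returning dead-row space to the operating system
psql -d gis -c "VACUUM FULL planet_osm_ways;"

# rebuild its indexes, which VACUUM FULL alone can leave bloated
psql -d gis -c "REINDEX TABLE planet_osm_ways;"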
