Chris Miller wrote:
Something else I probably should have mentioned: enabling the disk cache
does NOT reduce the memory required to perform the split, though it does
make multiple passes during the second stage much quicker, and the more
passes that are used (via smaller --max-areas
L I don't know what change made it possible, but I finally succeeded in
L processing all of North/South America with the latest splitter and 3.9
L GB of heap space. I used the cache option and max-node=1.2 million. I've
L tried this a few times before with older splitter versions, but this
L is the
Chris Miller wrote:
Have you tried the .kml import? I checked this in a few days ago - you can
just pass in a .kml file instead of areas.list, and everything should behave
as it would with the areas.list file.
Not yet, but I certainly will and report back any success or failure.
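For reference, the .kml import described above amounts to swapping the file passed to the split-file option. A possible invocation (an assumption, not taken verbatim from the thread: the --split-file option is named later in this thread, and the exact flags and heap size depend on your splitter revision and input) might look like:

```sh
# Assumed invocation -- check your splitter revision for the exact
# option names. The heap size is only an example.
java -Xmx2000m -jar splitter.jar --split-file=areas.kml planet.osm
```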
Hi Chris,
I haven't tested the --cache parameter yet, but I have written some tools
that have dealt with the same problem.
On Wed, Aug 26, 2009 at 12:00:50AM +, Chris Miller wrote:
Hi Marko,
MM Are you caching the command line parameters and the file sizes and
MM time stamps of all input files? That should be rather safe. To be
MM even safer, you should perhaps also cache the splitter revision
MM number.
Yes I cache the file size, timestamp, and canonical path of each
Hi Francois,
Have a go with splitter r77. It should now detect what a cache from a previous
splitter run contains. It will then reuse or regenerate it as is appropriate
for the parameters you have provided to the current run. I've tested just
about every combination of --split-file, --cache,
I've just checked in some changes to the splitter that add a new --cache
parameter. This is designed to speed up the splitting process, especially
on large splits that require multiple passes over the .osm file, or in
situations where you run the splitter several times on the same .osm file.