On 9/14/2010 11:43 AM, Bill Thoen wrote:
  Steve,

Adding viewsheds to the package would certainly up the computing costs;
I was wondering if you had a limit to what sort of processing power
you've got there. ;-)

It is not unlimited, so part of what makes the problem interesting to me is finding an economical way to compute it.

I also think what you're proposing might be interesting, but you have to
be careful about what conclusions you can draw from it. At what point
does the cost due to gradient variations become insignificant to the
overall cost of a route for a particular type of vehicle? For a trucker
on an interstate highway it doesn't matter, because the statistical
noise of factors such as high speeds and short driving times, balanced
against the higher price of fuel, services and road freight taxes,
completely overwhelms the cost contributed by changes in gradient. So
in those cases you'd be computing numbers but not saying anything.

Agreed, doing anything useful for the trucking industry probably requires a much deeper understanding of the industry and its regulations. Luckily it is not my main focus :)

A different scenario, where gradient /is/ a significant factor, would be
a three-day, 100-mile bike ride through the mountains (like the
'Ride the Rockies' event held around here every year). The power
that bicyclists can produce is so low that speeds and endurance are
strongly affected by grades. But a bicyclist doesn't typically operate
on the scale of the nation so applying the calculations to the entire
TIGER file is overkill. Also, the bicyclist operates at such a fine
level of detail that the source data you're using to calculate gradient
(30m DEM) may be too coarse to be reliable on the bicyclist's scale.

Right, these points are all valid and have crossed my mind at one point or another. Applying this to the TIGER data set is not that big a deal; I already have the TIGER data in XYZ, so computing grades is not that difficult. Another reason for applying it to the whole data set is to build a web portal with US coverage: granted, any single route will not have continental scope, but individual routes might be anywhere on the continent.
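To make "not that difficult" concrete, here is roughly what I mean (a minimal Python sketch; it assumes each segment's vertices are (x, y, z) tuples in a projected coordinate system with meters on all three axes, and the function name is just illustrative):

import math

def segment_grade(vertices):
    """Net rise over run for a polyline of (x, y, z) vertex tuples."""
    run = 0.0
    for (x1, y1, _), (x2, y2, _) in zip(vertices, vertices[1:]):
        run += math.hypot(x2 - x1, y2 - y1)   # horizontal distance
    rise = vertices[-1][2] - vertices[0][2]   # net elevation change
    return rise / run if run > 0 else 0.0     # grade as a fraction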

I'm not saying it isn't worth doing, I'm just saying you'll need to
qualify the precision of your results before you can say much about
applying this to any real-world problems.

I'll post a link back if I get anything working. Meanwhile, thanks for the ideas and thoughts.

-Steve

- Bill Thoen


On 9/13/2010 5:28 PM, Stephen Woodbridge wrote:
Bill,

Thanks for the ideas. I might try to do something with the viewshed
idea in the future. It would need a LOT of computing to process all
the road segments in a national dataset like TIGER.

But for now I would like to figure out the routing costs.

One idea I had was to compute the grade for a segment and then compute
cost as:

cost = (time or distance) * scalefactor * max(abs(grade), 1.0)

This would have the effect of causing segments with a lot of grade to
have a higher cost of traversal.

Or similarly, if you want to pick roads with a lot of elevation
changes, then use a cost factor like:

cost = (time or distance) * scalefactor /
abs(sum_elevation_changes_over_the_segment)

This would have the effect of decreasing the traversal cost for
segments that have a lot of elevation changes.

These are pretty crude estimates and probably would need some fine
tuning to get reasonable results.
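For example, in Python (a rough sketch only; it assumes grade is expressed in percent, so the max(..., 1.0) floor leaves near-flat segments at their base cost, and the same floor in the second function is my own guard against dividing by zero on flat segments, not part of the formula above):

def grade_penalty_cost(base_cost, grade_pct, scale_factor=1.0):
    # Steeper segments cost more to traverse; near-flat segments
    # keep their base cost because of the max(..., 1.0) floor.
    return base_cost * scale_factor * max(abs(grade_pct), 1.0)

def hilliness_bonus_cost(base_cost, elev_changes, scale_factor=1.0):
    # Segments with more total elevation change become cheaper; the
    # max(..., 1.0) guard (my addition) avoids division by zero on
    # perfectly flat segments.
    return base_cost * scale_factor / max(abs(elev_changes), 1.0)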

Thanks,
-Steve W

On 9/13/2010 4:24 PM, Bill Thoen wrote:
Stephen Woodbridge wrote:
Hi all,

(This is cross posting from the pgrouting list, sorry for the dups.)

I have preprocessed some shapefile data and added elevation
information in the Z value of the coordinates. I'm wondering how to
best utilize that in routes and would like any thoughts or ideas you
might be willing to share.

The obvious answer is to wrap the elevation data into the cost values,
as this is simple and straightforward and does not require code
changes. Which brings me to my question: what have other people done or
thought about doing in this regard?
Since you seem to enjoy large database problems, have you considered
loading the DEM data together with the roads and sampling the viewshed
every few km? You could then create an objective cost factor for
"scenic," proportional to the amount of land visible, with some
adjusting factor that distinguishes morphology, land cover, or other
weighted factors from each sample point. Creating a scale of "scenic"
and "picturesque" as it goes form "ho-hum flatland" to "precipitous,
brake-burning, wheel-gripping adventurous" might be fun all by itself.
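As a very crude illustration of the sampling idea (not a real viewshed algorithm like GRASS's r.viewshed, just a toy line-of-sight counter; the DEM array, ray count, radius and observer height are all illustrative assumptions):

import math
import numpy as np

def visible_cell_count(dem, row, col, radius_cells=100,
                       n_rays=180, observer_height=2.0):
    """Count DEM cells visible from (row, col) by walking radial rays.

    dem is a 2D numpy array of elevations on square cells; a cell is
    counted when its line of sight clears the horizon formed by all
    nearer cells along the same ray.
    """
    eye = dem[row, col] + observer_height
    visible = 0
    for k in range(n_rays):
        theta = 2.0 * math.pi * k / n_rays
        max_slope = -math.inf
        for d in range(1, radius_cells + 1):
            r = int(round(row + d * math.sin(theta)))
            c = int(round(col + d * math.cos(theta)))
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                break                      # ray left the grid
            slope = (dem[r, c] - eye) / d  # slope from the observer
            if slope > max_slope:          # clears the horizon so far
                max_slope = slope
                visible += 1
    return visible

# usage (hypothetical file): dem = np.loadtxt("dem.asc", skiprows=6)
#                            score = visible_cell_count(dem, 500, 500)

Normalizing that count by the number of cells sampled would give a per-point "scenic" factor to feed into the route cost.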

If you're looking for 3D ideas, there's a GIS consulting company across
the hall from me that specializes in 3D information, visualization and
analysis, and I know they are working on web services to deliver the
sort of data that an application like yours would consume. Their website
is full of 3D imagery, articles and examples that you might want to
check out for ideas or inspiration. There's a particularly good
demonstration of using fog instead of shadow to create a visual
representation of ridge lines, if you're using those to determine a
topographic index (see http://ctmap.com/serendipity/index.php).

*Bill Thoen*
GISnet - www.gisnet.com
1401 Walnut St., Suite C
Boulder, CO 80302
303-786-9961 tel
303-443-4856 fax
bth...@gisnet.com



--
*Bill Thoen*
GISnet - www.gisnet.com
303-786-9961



_______________________________________________
Discuss mailing list
Discuss@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/discuss
