As in any other case where somebody edits OSM data, it's the job of
whoever is doing the edit to make sure they don't destroy valuable
information and are actually improving the data that's in the OSM
database. This has nothing to do with distributed version control,
because OSM is centralized. It just means that when a new version of
some data becomes available, you have to do a proper merge.
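
A minimal sketch of what such a merge could look like for tags, assuming
you still have the previously imported version around as the common
ancestor (all names here are made up for illustration):

    # Three-way merge of OSM-style tag dictionaries.
    # base   = the previously imported version (common ancestor)
    # ours   = the current OSM version, possibly edited by mappers
    # theirs = the new version of the external dataset
    def merge_tags(base, ours, theirs):
        merged, conflicts = {}, {}
        for key in set(base) | set(ours) | set(theirs):
            b, o, t = base.get(key), ours.get(key), theirs.get(key)
            if o == t:            # both sides agree (or both deleted)
                value = o
            elif o == b:          # only the import changed this tag
                value = t
            elif t == b:          # only a mapper changed this tag
                value = o
            else:                 # both changed it differently
                conflicts[key] = (o, t)
                continue
            if value is not None:
                merged[key] = value
        return merged, conflicts

    merged, conflicts = merge_tags(
        {"building": "yes"},
        {"building": "yes", "name": "Old Mill"},  # a mapper added a name
        {"building": "industrial"},               # new import refined the type
    )
    # merged == {"building": "industrial", "name": "Old Mill"}

Tags that both sides changed differently end up in the conflicts dict
and still need a human decision, which is exactly the manual merge work
meant above.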

And yes, it would be nice to have tools for distributed version control
of geodata, for instance built upon git. But that's not the OSM way of
doing things. It has been proposed a few times, but nobody has come up
with a coherent plan for how this is supposed to work.

But I do see many cases where a distributed versioning system for
geodata would be nice. It makes a lot of sense, for instance, if you are
not trying to create "one description of the world" (as OSM is doing),
but "many different descriptions". Say you want to create an overview
world map: just continents, countries, a few big cities, etc. Somebody
else might want to use the same map, but add, say, capital cities, even
small ones. Somebody else might make other changes. If all of these maps
live in different git repositories forked from each other, changes in
one map can migrate to the others if the owners of those maps like them.
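
A rough sketch of that model, with maps as plain feature dictionaries
standing in for real git repositories (purely illustrative, not a
proposal for concrete tooling):

    # Each "map" maps a feature id to its properties; a fork is a copy.
    overview = {
        "de": {"type": "country", "name": "Germany"},
        "fr": {"type": "country", "name": "France"},
    }

    fork = {fid: props.copy() for fid, props in overview.items()}
    fork["vaduz"] = {"type": "city", "capital": "yes", "name": "Vaduz"}

    def diff(old, new):
        """Features added or changed in `new` relative to `old`."""
        return {fid: p for fid, p in new.items() if old.get(fid) != p}

    # The owner of the original map likes the change and pulls it in.
    overview.update(diff(overview, fork))
    assert "vaduz" in overview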

Jochen

On Wed, Jun 01, 2011 at 03:55:29PM +0200, Martijn van Exel wrote:
> Date: Wed, 1 Jun 2011 15:55:29 +0200
> From: Martijn van Exel <m...@rtijn.org>
> To: Talk Openstreetmap <talk@openstreetmap.org>
> Subject: [OSM-talk] How to deal with versioning
> 
> Hi all,
> 
> This came up during an import discussion on talk-nl and I am curious
> about your thoughts and ideas.
> 
> Below is an edited-down version of a question I posted on the GIS
> StackExchange[1]:
> 
> We currently deal with object versioning in a rudimentary way: each
> object has an integer version number, and only the object with the
> highest version is exposed in the live database. The database uses
> optimistic locking, so users must manually resolve all conflicts that
> occur when uploading contributions.
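
(For illustration, that version check on upload boils down to something
like the following minimal sketch; the names are made up and this is not
the actual API code.)

    # Optimistic locking: an upload must say which version it was based
    # on, and is rejected if the object has moved on in the meantime.
    class Conflict(Exception):
        pass

    def upload(db, obj_id, based_on, new_data):
        version, _ = db[obj_id]          # db: obj_id -> (version, data)
        if version != based_on:
            # Somebody else edited first; the client must merge and retry.
            raise Conflict(f"object {obj_id} is at v{version}, "
                           f"edit was based on v{based_on}")
        db[obj_id] = (based_on + 1, new_data)

    db = {42: (3, {"building": "yes"})}
    upload(db, 42, 3, {"building": "house"})   # ok, now at version 4
    # a second upload based on v3 would now raise Conflict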
> 
> This all works reasonably well as long as human contributions through
> the editors are the only mode of contribution - but they aren't.
> Increasingly, imports of open public sector data are conducted. These
> make for more complex versioning issues. Consider the following
> scenario:
> 
> 1.   A building object is imported from an open public sector dataset
> 2.   The building receives some modifications by human contributors
> (attributes, geometry, or both)
> 3.   A new version of the public sector data becomes available and is
> imported.
> 
> Currently, in step 3, the human contributions would be lost unless
> each building that received community modifications is manually merged
> with the new import.
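
(One way to at least find the affected objects automatically: compare
each object's current version with the version it had right after the
last import. A sketch with hypothetical bookkeeping; the importer would
have to keep such a log itself.)

    # object id -> version the object had right after the last import
    imported_at = {101: 1, 102: 1, 103: 1}

    def needs_manual_merge(db, import_log):
        """Objects whose version moved past the imported one."""
        return [oid for oid, v in import_log.items() if db[oid][0] > v]

    db = {101: (1, {}), 102: (4, {}), 103: (1, {})}  # 102 was edited
    print(needs_manual_merge(db, imported_at))       # -> [102]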
> 
> How can we deal with this situation? Do we need to look at distributed
> version control in software development? How can methods of DVC be
> adapted to deal with distributed spatial data maintenance?
> 
> [1] 
> http://gis.stackexchange.com/questions/10493/how-to-deal-with-versioning-in-openstreetmap
> 
> -- 
> Martijn van Exel
> http://about.me/mvexel
> 

-- 
Jochen Topf  joc...@remote.org  http://www.remote.org/jochen/  +49-721-388298


_______________________________________________
talk mailing list
talk@openstreetmap.org
http://lists.openstreetmap.org/listinfo/talk
