On 10.03.2015 at 03:52, Alex Barth wrote:
> Casting the net a little wider:
>
> What do you think are the big topics and challenges for OpenStreetMap
> as we're about to go into the second decade? What does this mean for
> State of the Map?

I'm not sure what this means for SotM (?), but here are my thoughts on strategic topics as I see them, being an active OSM community member from Poland. They are probably not very outstanding or visionary, but IMHO they are important, and it can easily take a few years - up to a decade - to achieve these goals nevertheless... =}

TL;DR summary:

1. More synergy between (sub)projects
2. Managing data overload
   2a. Semi-automated task execution
   2b. Continuous sanity check tools
   2c. Big data analysis
3. Tools for personalizing OSM data presentation
4. Redesigning some key tagging schemes
5. Cooperation with external projects (especially open, like Wikidata)

***

And now the "TL" part itself:

1. More synergy between (sub)projects

OSM acts in a highly decentralized way. I think this is healthy in general, but because the project is now on the road to becoming "The Map" (much like Wikipedia is "The Encyclopedia" now), this results in growing inefficiency and inertia. Of course we can't avoid having some departments, simply because OSM is getting bigger and people tend to focus on their favorite activity, but nobody seems to care about the OSM output as a whole. For example, you can discuss every detail of tagging some esoteric feature to death while completely ignoring how it will affect rendering, routing or usability for mappers (for example, how to tell the difference between tags A and B).

While it's a general issue and can't be resolved once and for all, I think connecting more dots inside the project is a very important factor in reaching "The Map" level.

2. Managing data overload

We have so much data these days! It's a blessing and a curse. There are still many remote places where nothing is mapped - sure! But now we know how to do it, we have the tools to do it and we can help local volunteers to get started (one of our most active members is helping to develop mapping in Nepal, another is fond of the Kyrgyzstan mountains).

But what to do with big, dense cities, where we have the landuses, the buildings, the streets and all the other "micromapping" things - plus the 3D layers, indoor mapping, underground facilities, etc. squeezed together? It will be increasingly hard to work with them - and new users will be the ones who suffer the most, because it's easier to damage something while trying to add a nice shiny object than to really extend the map.

Maybe we should split the data into "sets" or just make iD and JOSM more layer/theme-centric tools - I don't know yet, but the problem is here to stay.

2a. Semi-automated task execution

One of the things we should really start to practice is relying more on automation. Let me quote:

"Try not to let humans do what machines could do instead. As a rule of thumb, automating a common task is worth at least ten times the effort a developer would spend doing that task manually one time. For very frequent or very complex tasks, that ratio could easily go up to twenty or even higher."

[ http://www.producingoss.com/en/managing-volunteers.html#automation ]

As an example - we are just starting to use a semi-automated script for updating public transport routes in Warsaw. There are over 300 lines here and many hundreds of bus/tram stops - and they are constantly changing, of course, one by one. When I got interested in it, we had a dedicated wiki project, but it was on hiatus by then and I quickly gave up too, because tracking so many objects in the wiki was tedious and not very useful. Once we learned that the local public transport operator (called ZTM) had started giving away their precious raw data with coordinates (!), a few other mappers and I started to add all the stops, focusing on "stop_position" tagging. Now, after almost a year of work, we have all of them and a C++ script which creates an updated route network in about 15 minutes. Two important things happened recently in my hometown's transport system - one of the bridges was destroyed by fire and a new subway line was opened. If we were doing everything by hand, we would be dead now, because too many things would be changing too fast - but the script, rough as it still is, has no problem even if all the lines change at once. We only have to feed it new stops (when they are created) and we're done!

However - beware of "botocracy"! If there's only one person able to use the tool, it's not a sustainable model. It's important that the script only produces an .osm file as output, and we can easily handle injecting it into the database once a day - or at any other rate. We still have a chance to make manual fixes here and there if needed. Too much automation is not the answer - it should just let us get rid of the boring details!
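
Just to give a feeling of how simple the core idea is - the real Warsaw script is written in C++ and does much more, so treat this only as a rough Python sketch with made-up file names and a minimal tag set:

    # Toy illustration only: build one bus route relation from a list of
    # existing stop node IDs and write it out as an .osm file for review.
    import csv
    import xml.etree.ElementTree as ET

    def build_route_osm(stops_csv, route_ref, outfile):
        osm = ET.Element("osm", version="0.6", generator="route-sketch")
        rel = ET.SubElement(osm, "relation", id="-1")  # negative id = new object
        for k, v in (("type", "route"), ("route", "bus"), ("ref", route_ref)):
            ET.SubElement(rel, "tag", k=k, v=v)
        with open(stops_csv, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):  # expected columns: osm_node_id, name
                ET.SubElement(rel, "member", type="node",
                              ref=row["osm_node_id"], role="stop")
        ET.ElementTree(osm).write(outfile, encoding="utf-8", xml_declaration=True)

    build_route_osm("ztm_stops_line_503.csv", "503", "route_503.osm")

The real script obviously has to do much more than this, but the key point is the same: the output is a plain .osm file that a human can review before uploading.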

2b. Continuous sanity check tools

Tools such as this public transport updating script need to be run continuously - be it once per day or whatever. And we need more tools to monitor the whole service and the data. Sure, we can't predict everything (see this message:

https://lists.openstreetmap.org/pipermail/talk/2015-March/072273.html ),

but we can have a "control center" for managing all these bots. The more data we have and the bigger our project gets, the more we will also need this kind of automation. And it should be open to the public to avoid "hit by a bus" kinds of situations.
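
To give one concrete (and completely made-up) example of such a check: a tiny bot that asks the Overpass API how many bus route relations exist in a bounding box around the city and complains when the number drifts outside an expected range. The bounding box and thresholds below are just illustration values:

    # Rough sketch of a daily sanity check, e.g. run from cron.
    import json
    import urllib.parse
    import urllib.request

    OVERPASS = "https://overpass-api.de/api/interpreter"
    QUERY = '[out:json][timeout:60];relation["route"="bus"](52.10,20.80,52.37,21.30);out ids;'

    def count_bus_routes():
        data = urllib.parse.urlencode({"data": QUERY}).encode()
        with urllib.request.urlopen(OVERPASS, data=data) as resp:
            return len(json.load(resp)["elements"])

    EXPECTED_MIN, EXPECTED_MAX = 250, 400  # tune per city
    n = count_bus_routes()
    if not EXPECTED_MIN <= n <= EXPECTED_MAX:
        print(f"WARNING: {n} bus routes found, expected {EXPECTED_MIN}-{EXPECTED_MAX}")
    else:
        print(f"OK: {n} bus routes")

A "control center" would then simply be the place where dozens of such checks are registered, scheduled and have their warnings collected.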

2c. Big data analysis

We should not only collect more data, but also analyze and aggregate some of it. Some data analysis software probably already exists that can be used with our database more or less directly, but right now it looks like we care only about editing and rendering maps (and sometimes printing them too) - let's imagine some other outputs within the scope of OSM!
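
As a trivial, purely illustrative example of aggregation rather than rendering: counting amenity values in a local .osm extract with nothing but the standard library. In practice one would of course reach for proper tools (osmium, PostGIS and friends), but the point is that the output here is a statistic, not a tile:

    # Count amenity=* values in an .osm extract; the file name is a placeholder.
    import xml.etree.ElementTree as ET
    from collections import Counter

    def amenity_histogram(osm_file):
        counts = Counter()
        # iterparse keeps memory usage low even for large extracts
        for _, elem in ET.iterparse(osm_file, events=("end",)):
            if elem.tag in ("node", "way", "relation"):
                for tag in elem.findall("tag"):
                    if tag.get("k") == "amenity":
                        counts[tag.get("v")] += 1
                elem.clear()  # free already-processed elements
        return counts

    for value, n in amenity_histogram("warsaw.osm").most_common(10):
        print(f"{value}: {n}")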

3. Tools for personalizing OSM data presentation

What we have now is 5 different map styles to choose from on the main page, but people will need more and more personal styles (AKA "data skins"). Rendering all of them on OSM servers is probably impossible, but we can develop (just-in-time?) client-side rendering interfaces for our database. It has to be easy to use, like switching visual layers on and off and letting people choose crazy colors, icons and the areas they are interested in, right in their browser, just by clicking.

We can still serve some basic CSS skins/presets via our repository - like "map for developers" (I would like to see every POI, street lamp and the pipelines too) or "interactive contour map for a quiz/web page" (two of my friends were asking me about exactly this for their work!) - but the rendering should happen on their side. They can share some elements to avoid duplication - P2P in the background can be of some use here, or even "CloudMapping". We should use people's machines much more. I still miss the Tiles@Home project, but that was still the old static, one-size-fits-all concept. In the next decade users could form a dynamic cloud, sharing common resources while dynamically creating and compiling personal "forks".
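
One way to think about a "data skin" is simply a list of tag filters with style overrides that the client applies while drawing. The sketch below is not any existing style format, just an illustration of how small such a thing could be:

    # A "data skin" as plain data: the first rule whose tags all match wins.
    DEVELOPER_SKIN = [
        ({"man_made": "pipeline"}, {"color": "#ff00ff", "width": 3}),
        ({"highway": "street_lamp"}, {"icon": "lamp", "color": "#ffaa00"}),
        ({}, {"color": "#cccccc", "width": 1}),  # fallback rule
    ]

    def style_for(feature_tags, skin):
        for required_tags, style in skin:
            if all(feature_tags.get(k) == v for k, v in required_tags.items()):
                return style
        return None

    print(style_for({"man_made": "pipeline", "location": "overground"}, DEVELOPER_SKIN))
    # -> {'color': '#ff00ff', 'width': 3}

Sharing and forking skins like this is cheap, because they are just data - the expensive part (rendering) stays on the user's machine.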

4. Redesigning some key tagging schemes

I think this will be one of the hardest things to change: while tag crafting is mostly a grassroots process, we need to rethink some schemes in a more systematic way.

For example, amenity=school should really be landuse=school (if it's not used just for the building), a landcover namespace should arise (so that on landuse=park we see green space only where there is actually grass, not over the whole area), maybe some natural/man_made tagging should be replaced by a terrain namespace... What exactly should be (re)designed from top to bottom is not the important part this time - the point is that once you have the needed level of expertise, you can make the new implementation better instead of just patching the original one.

We also have a lot of detailed objects which are not always clearly defined, and we should try a more "cascading" approach, like "amenity=fast_food" => "amenity=food" + "amenity_food_type=fast_food" (or something alike). That way we can have a "Here is food!" label without forcing mappers to make a distinction they are not really sure about.
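
Purely as an illustration of what such a "cascading" migration could look like (the target keys amenity=food and amenity_food_type are hypothetical, as said above - "or something alike"):

    # Hypothetical one-way rewrite to a cascading scheme.
    CASCADE = {
        "fast_food": ("food", "fast_food"),
        "restaurant": ("food", "restaurant"),
        "cafe": ("food", "cafe"),
    }

    def cascade_tags(tags):
        new_tags = dict(tags)
        subtype = CASCADE.get(new_tags.get("amenity"))
        if subtype:
            new_tags["amenity"], new_tags["amenity_food_type"] = subtype
        return new_tags

    print(cascade_tags({"amenity": "fast_food", "name": "Bar Mleczny"}))
    # -> {'amenity': 'food', 'name': 'Bar Mleczny', 'amenity_food_type': 'fast_food'}

The nice property is that a mapper who only knows "there is food here" can stop at amenity=food and leave the subtype to somebody else.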

I expect there will be a strong reaction against a "top-down committee" methodology, but some well-known problems with our ontology architecture will never go away if we try to change it tag by tag. Of course, that is true only for this class of problems - most new schemes will still be best when created ad hoc and then used by more and more mappers.

5. Cooperation with external projects (especially open, like Wikidata)

I remember when Wikipedia was afraid of using maps from OSM, because it was an external project. But if the license terms are not prohibitive and you know the other community works with the same principles in mind as your own, it would be a shame not to use their resources just because we don't fully control them. Recently I also heard some voices about using Wikidata in OSM, so for me it's the same story retold from the other side. =}
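
One way this cooperation could work technically is just a wikidata=* tag on the OSM object, plus a few lines to query the other side (the Q-number below is only a placeholder for whatever such a tag would contain):

    # Minimal sketch: resolve the value of a wikidata=* tag against
    # Wikidata's public JSON endpoint.
    import json
    import urllib.request

    def wikidata_label(qid, lang="en"):
        url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
        with urllib.request.urlopen(url) as resp:
            entity = json.load(resp)["entities"][qid]
        return entity["labels"].get(lang, {}).get("value")

    print(wikidata_label("Q270"))  # Q-number copied from some object's wikidata tag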

OSM is a mapping/GIS community and is defined by this. It should not try to reinvent everything, so as not to lose focus. If somebody wants to try something different, she's free to join those "other" projects as well! And all these projects can cooperate to share what can be shared.

***

Well, that is what comes to my mind when thinking about strategic visions for the next few years. Most of the time I just try to scratch my own little itch day by day, but after a few years in the project I also have some long-term expectations and ideas. They may not be accurate or - heaven forbid! - the best ones, but remember: it was YOU who asked me about "big topics and challenges for OpenStreetMap"! ;-}}}

--
Piaseczno Miasto Wąskotorowe

_______________________________________________
talk mailing list
talk@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk
