Dear all,

As you probably know, the conversation about machine-learning techniques and their use for OpenStreetMap has been very emotional in our community. Opinions range from concern about the potential negative impacts to the hope that it will significantly improve both the quality and the speed of OSM mapping by letting people focus on what they do best.
We want to take an evidence-based look at the effects of machine-learning mapping on OpenStreetMap. To do this, we are working with several organizations (the German Geoscience Research Center, the University of Heidelberg, and the Humanitarian OpenStreetMap Team) to conduct research that will quantify the measurable impact of the currently proposed mapping workflow. We believe a reproducible and transparent study will give us real insight.

We are planning an experiment comparing four different datasets from the same area:

* Reference data: well-mapped OpenStreetMap data (built up over a longer period of time)
* Conventional remote mapping data from OpenStreetMap (single mapping event)
* Machine-learning-assisted remote mapping data produced with RapiD (single mapping event)
* Data created by the latest generation of AI models (without any human editing)

And we want to look at the following indicators:

* Quality: descriptive analysis of the mapped objects
* Speed of mapping

We want all the data and workflows we produce to be as reusable as possible, so all data from this experiment will be open and transparent. Please contact us if you are interested in any further analysis; we are happy to hear your suggestions before we start, so that we can ensure the raw data are as useful and correct as possible. In any case, we will keep you informed here.

Best wishes,
Felix

PS: There is a related blog post: https://www.hotosm.org/updates/how-we-measure-the-effects-of-ai-assisted-mapping/

--
machine-learning mailing list
[email protected]
https://lists.openstreetmap.org/listinfo/machine-learning
