On 2015-05-02 23:28, Frederik Ramm wrote:
> We collect observations.
> ...
> There is no way for the mapper on the ground to know that the name on the building "should" be something else.
I think that sounds rather disingenuous. We humans are perfectly capable of correctly interpreting data which contains errors, and of recognising what the error is. And there are plenty of types of information in OSM which are not (easily) verifiable on the ground; admin boundaries spring to mind. The important thing, to my mind, is that the information should be independently verifiable from publicly accessible (and appropriately licensed) sources, thus making it objective. Of course signs on the ground come into that category, but they are not necessarily superior to other valid sources.
There are plenty of spelling and grammatical mistakes on public signs, and although we are not the world's signage police, we should not be in the business of propagating obvious errors either.
You mentioned "quality" in another post; that implies "the extent of adherence to agreed criteria". It's a problem that we cannot yet measure the quality of our data, because there is no consensus on what is "good" and what is not. That's why these discussions go round and round and round for a couple of weeks and then die off: there seems to be little motivation or drive to reach a clear conclusion. We haven't even worked out *how* to determine what is "good".

It's time we grew the balls to have the very painful talk about good data vs. bad data, followed by finding the right balance between quality and quantity. Quality itself can be subjective: what's fit for my purpose may break the data's usability for yours. And yet there is only one OSM data set. What are we going to agree to put in there, to keep the majority of people "happy"? What is our shared definition of quality?
//colin