To build on Jean-Marc's point, one thing I raised at the HOT Summit, and also
recently with the London Missing Maps team, is the need to tackle errors at
the source. Having validators is vital, but I believe we can improve the
initial mapping through a few tweaks to the way new mappers are trained.

Personally, I believe it would be really powerful to give new mappers a way
to understand the importance of high-quality mapping.

For instance, if the iD Editor could not only highlight overlapping buildings
but ALSO explain why overlapping buildings have an impact, people would be
able to relate to the problem and change their behaviour.

For example, the tool could explain that overlapping buildings can result in
inaccurate population-density calculations, which in turn can affect
humanitarian response (see Pierre Belland's earlier HOT mailing list post on
the DRC as a case study). If we can explain this to people in a compelling
way, I believe the quality of the mapping would improve.

If something could be built within the current tool set (e.g. embedded text
or video within iD's validation step), that should help ensure consistency.
A rough sketch of what such a check might look like follows.
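
To make the idea concrete, here is a minimal TypeScript sketch of a check
that pairs detection with a "why it matters" explanation. All of the names
and types below are invented for illustration; this is not iD's actual
validation API, just the shape of the idea.

// Hypothetical sketch - not iD's real API. A check that pairs
// detection with an explanation of why the issue matters.

type Point = { lon: number; lat: number };
type Building = { id: string; ring: Point[] }; // outer ring of the polygon

// Standard ray-casting point-in-polygon test.
function pointInRing(p: Point, ring: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const a = ring[i], b = ring[j];
    if ((a.lat > p.lat) !== (b.lat > p.lat) &&
        p.lon < ((b.lon - a.lon) * (p.lat - a.lat)) / (b.lat - a.lat) + a.lon) {
      inside = !inside;
    }
  }
  return inside;
}

// Approximate overlap test: does any vertex of one building fall inside
// the other? (Misses edge-only crossings, but fine for a training prompt.)
function buildingsOverlap(a: Building, b: Building): boolean {
  return a.ring.some(p => pointInRing(p, b.ring)) ||
         b.ring.some(p => pointInRing(p, a.ring));
}

interface ValidationIssue {
  severity: "warning";
  message: string;   // what is wrong
  rationale: string; // why it matters - the part that is missing today
}

function checkOverlap(a: Building, b: Building): ValidationIssue | null {
  if (!buildingsOverlap(a, b)) return null;
  return {
    severity: "warning",
    message: `Buildings ${a.id} and ${b.id} overlap.`,
    rationale: "Overlapping buildings inflate building counts, which " +
      "skews the population-density estimates used in humanitarian response."
  };
}

The point is the rationale field: iD already highlights the geometry problem,
but surfacing the consequence alongside the warning is what would change
behaviour.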

Combining such tweaks with real-time monitoring tools, as Bjoern suggests,
should improve quality at mapathons; a minimal example of what that could
look like follows.
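
For the monitoring half, even a very small script could poll the OSM
changeset API for the task area and flag unusually large changesets for a
validator to look at. The bounding box, threshold, and console notification
below are illustrative assumptions, and the sketch assumes the JSON variant
of the changeset query endpoint; a real deployment would use something like
OSMCha, which offers far richer filters.

// Minimal monitoring sketch. Bbox and threshold are made-up examples.

interface Changeset {
  id: number;
  user: string;
  changes_count: number;
  tags: Record<string, string>;
}

const BBOX = "29.1,-1.8,29.3,-1.6"; // minlon,minlat,maxlon,maxlat - example area
const THRESHOLD = 500;              // edits per changeset that trigger a look

async function pollChangesets(): Promise<void> {
  const url = "https://api.openstreetmap.org/api/0.6/changesets.json" +
              `?bbox=${BBOX}&closed=true`;
  const res = await fetch(url); // built-in fetch in Node 18+
  if (!res.ok) throw new Error(`OSM API returned ${res.status}`);
  const data = await res.json() as { changesets: Changeset[] };

  for (const cs of data.changesets) {
    if (cs.changes_count > THRESHOLD) {
      // At a mapathon this could ping a validators' chat channel instead.
      console.log(`Review changeset ${cs.id} by ${cs.user}: ` +
                  `${cs.changes_count} edits ` +
                  `(${cs.tags["comment"] ?? "no comment"})`);
    }
  }
}

// Poll once a minute for the duration of the event.
setInterval(() => pollChangesets().catch(console.error), 60_000);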

Essentially, people attend Missing Maps mapathons to contribute to a worthy
cause. They want to map as well as they can, so if more (and more consistent)
support is offered, the quality will improve.

Thanks

Steve
________________________________
From: Jean-Marc Liotier <j...@liotier.org>
Sent: 12 December 2018 22:30
To: t...@openstreetmap.org; hot@openstreetmap.org
Subject: Re: [HOT] Quality (was: The point on the OSM Response to the DR Congo 
Nord Kivu Ebola outbreak)

On 12/12/18 2:16 AM, Ralph Aytoun wrote:

I am also concerned about the quality of the mapping that is tying up projects 
because it takes up so much validation time. [..]

This perception is (don't take it personally - I am answering your message,
but I am not singling you out) a symptom of a widespread problem: quality
perceived as a separate activity, an extra cost tacked onto the actual
productive work.

Considering the quality assurance process as a distinct set of activities has 
the very unfortunate effect of creating an unnecessary conflict with production.

So:
- Start with a clearly defined, objective quality goal, just adequate for the
planned purpose of the data
- Teach contributors that not meeting this goal is worse than doing nothing:
negative value
- Monitor contributions in real time, to catch deviations before they
snowball... I love Bjoern's idea, though OSMCha works for me
- Reiterate!

Quality is the essence of the whole activity, not a distinct step.

Yes, it spoils the fun for new contributors thrilled to start mapping away and
see their gamified metrics take off spectacularly in a rain of digital
achievement awards. But it also helps them make sense of what they are doing
instead of launching them on an open-ended trip with a hazy purpose - and what
is better than finding meaning in a task?

Normative leadership may feel incompatible with a flat collaborative forum such
as OpenStreetMap, but it makes sense within a directed project with a declared
purpose, in which contributors voluntarily participate. If they trust the
project leadership enough to join as contributors, they may expect normative
guidance and even be disappointed not to feel it from the leadership.
_______________________________________________
HOT mailing list
HOT@openstreetmap.org
https://lists.openstreetmap.org/listinfo/hot
