This is a good discussion. Chiming in here to share some additional
thoughts and work we’ve been doing on the Tasking Manager this fall.

I agree with what people have said. Quality isn’t just a “let’s improve
how we validate” problem, and it isn’t only an editor problem either: at
many mapathons the Tasking Manager is the entry point for new mappers, so
it can be part of both the problem and the solution. It’s a multi-faceted
problem, and we’re trying to work on the Tasking Manager side to help out.
The TM developers are well aware that many new mapathon mappers first use
an OSM editor through the TM, so there is a real opportunity to support
quality improvements there.

As Steve mentioned at the HOT Summit, we started to dig into some of these
problems through a couple of workshops. Notes from the two workshop
sessions are here if anyone is interested:
  - Data quality improvements workshop:
https://github.com/hotosm/hot-summit-2018/wiki/Design-Workshop-:-Data-Quality-Validation-Improvements-with-TM-iD-Editor
  - AI & ML workshop:
https://github.com/hotosm/hot-summit-2018/wiki/Design-Workshop-:-AI-&-Machine-Learning---Integrating-into-HOT-Tools

As we’ve been working on the Tasking Manager this fall and preparing for
additional development this spring, we’ve been looking into a few items
related to quality:

  1. Onboarding. Onboarding doesn’t just start with good training at
mapathons; it can continue in the app as well. We’ve only just started
exploring ways to improve how people learn what they need to know before
they start mapping. Some notes from a recent design conversation we had:
https://github.com/hotosm/tasking-manager/wiki/Onboarding-Idea-Generation-Session-Notes

  2. New mapper mapping experience. Related to onboarding and training:
the type of mapping a new mapper should do, and the way data is presented
to them, is vastly different from what works for a well-trained,
experienced mapper. The Tasking Manager workflow currently tries to meet
both sets of needs in the middle, which might be part of the problem at
the moment. In January we’re going to look at the entire experience to
work out where and how things should change to improve on this front.

  3. Testing ML as a quality support tool. Machine learning outputs could
be a huge support here, but that hasn’t been well tested yet. As we work
through the first part of the ML strategy HOT outlined this fall, we think
giving real-time feedback to a mapper will be extremely helpful in
improving data quality:
https://www.hotosm.org/updates/integrating-machine-learning-into-the-tasking-manager/.
What that looks like exactly is yet to be determined, but we’re hoping to
have a prototype in January that uses ML to attach a complexity measure to
each task grid square, which can ultimately help set mapping expectations
(a rough sketch of the idea follows below).
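
To make that concrete, here is a minimal, purely illustrative Python
sketch of what a complexity measure for a task grid square could look
like. Every function name, field name, and threshold below is made up for
illustration; none of this is actual Tasking Manager code, and a real
measure would need tuning against validator feedback.

    # Toy complexity score for a task grid square, assuming we already
    # have ML-predicted building footprints for the square.

    def task_complexity(predicted_buildings, task_area_sqm):
        """Return a 0 (easy) to 1 (hard) score from two simple signals:
        how many buildings were predicted, and how densely packed they are."""
        count = len(predicted_buildings)
        built_area = sum(b["area_sqm"] for b in predicted_buildings)
        density = built_area / task_area_sqm if task_area_sqm else 0.0
        count_signal = min(count / 250.0, 1.0)    # 250+ buildings: max signal
        density_signal = min(density / 0.4, 1.0)  # 40%+ built up: max signal
        return (count_signal + density_signal) / 2

    # e.g. 180 mid-sized predictions in a 1 km^2 square scores ~0.39,
    # so the square could be labelled "moderate" before anyone opens it.
    square = [{"area_sqm": 120.0} for _ in range(180)]
    print(round(task_complexity(square, 1_000_000), 2))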

Along with volunteering opportunities to work on technology projects with
HOT, we do have a job opening that will include working on the Tasking
Manager: https://www.hotosm.org/jobs/technical-project-manager/.



On Thu, Dec 13, 2018 at 7:50 AM Stephen Penson <stephen.pen...@hotmail.co.uk>
wrote:

> To build on Jean-Marc's point, one thing I raised at the HOT Summit and
> also recently to the London Missing Maps team is the need to tackle the
> errors at the source. Having validators is vital, but I believe we can
> improve the initial mapping through a few tweaks in the way new mappers are
> trained.
>
> Personally, what I believe would be really powerful is giving new mappers
> a way to understand the importance of high-quality mapping.
>
> For instance, if it were possible within the iD editor to not only
> highlight overlapping buildings but ALSO explain why overlapping buildings
> have an impact, then people would be able to relate and therefore change
> their behaviour.
>
> For example, the tool could highlight that overlapping buildings can
> result in inaccurate population-density calculations, which can have an
> impact on humanitarian response (see Pierre Belland's earlier post to the
> HOT mailing list on the DRC as a case study). If we can explain this to
> people in a compelling way, I believe the quality of the mapping would
> improve.
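>
> As a rough sketch of the kind of geometry check this would involve
> (using the Python shapely library purely for illustration; this is not
> how iD's validator is actually implemented, and the sample footprints
> are made up):
>
>     # Flag pairs of building footprints whose geometries overlap.
>     from itertools import combinations
>     from shapely.geometry import Polygon
>
>     buildings = {
>         "way/1": Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]),
>         "way/2": Polygon([(8, 8), (18, 8), (18, 18), (8, 18)]),  # overlaps way/1
>     }
>
>     for (id_a, a), (id_b, b) in combinations(buildings.items(), 2):
>         overlap = a.intersection(b)
>         if overlap.area > 0:
>             # This is where the "why it matters" message would go.
>             print(f"{id_a} and {id_b} overlap by {overlap.area:.1f} sq units: "
>                   "double-counted footprints skew population estimates.")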
>
> If something could be built within the current tool set (e.g. embedded
> text/video within iD validation), this should hopefully ensure consistency.
>
> Combining such tweaks with real-time monitoring tools, such as those
> Bjoern suggests, should improve quality at mapathons.
>
> Essentially, people attend Missing Maps mapathons to contribute to a
> worthy cause. People wish to map the best they can, so if more (and
> consistent) support is offered, the quality will improve.
>
> Thanks
>
> Steve
> ------------------------------
> *From:* Jean-Marc Liotier <j...@liotier.org>
> *Sent:* 12 December 2018 22:30
> *To:* t...@openstreetmap.org; hot@openstreetmap.org
> *Subject:* Re: [HOT] Quality (was: The point on the OSM Response to the
> DR Congo Nord Kivu Ebola outbreak)
>
> On 12/12/18 2:16 AM, Ralph Aytoun wrote:
>
> I am also concerned about the quality of the mapping that is tying up
> projects because it takes up so much validation time. [..]
>
> This perception is (don't take it personally; I'm answering your message,
> but I'm not singling you out) a symptom of a widespread problem: quality
> perceived as a separate activity, an extra cost tacked onto the actual
> productive work.
>
> Considering the quality assurance process as a distinct set of activities
> has the very unfortunate effect of creating an unnecessary conflict with
> production.
>
> So:
> - Start with a clearly defined, objective quality goal, just adequate for
> the planned purpose of the data
> - Teach contributors that not meeting this goal is worse than doing
> nothing: negative value
> - Monitor contributions in real time, to catch deviations before they
> snowball... I love Bjoern's idea, though OSMCha works for me (a minimal
> polling sketch follows after this list)
> - Reiterate!
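>
> To illustrate the real-time monitoring point above, here is a minimal
> Python sketch that polls the public OSM API for recent changesets in a
> project's bounding box (the bbox and the alerting logic are placeholders;
> dedicated tools like OSMCha do this properly):
>
>     # Poll the OSM API for recent changesets in a bounding box.
>     import time
>     import requests
>
>     API = "https://api.openstreetmap.org/api/0.6/changesets"
>     BBOX = "29.0,-1.8,29.3,-1.5"  # placeholder: min_lon,min_lat,max_lon,max_lat
>
>     while True:
>         resp = requests.get(API, params={"bbox": BBOX}, timeout=30)
>         resp.raise_for_status()
>         # The response is XML; a real monitor would parse it and flag
>         # changesets from brand-new accounts for early validator review.
>         print(f"fetched {len(resp.content)} bytes of changeset metadata")
>         time.sleep(300)  # re-check every five minutes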
>
> Quality is the essence of the whole activity, not a distinct step.
>
> Yes, it spoils the fun for new contributors thrilled to start mapping away
> and see their gamified metrics take off spectacularly in a rain of digital
> achievement awards. But it also helps them make sense of what they are
> doing instead of launching them on an open-ended trip with a hazy purpose,
> and what is better than finding meaning in a task?
>
> Normative leadership may feel incompatible with a flat, collaborative
> forum such as OpenStreetMap, but it makes sense within a directed project
> with a declared purpose, in which contributors voluntarily participate. If
> they trust the project leadership enough to join as contributors, they may
> expect normative guidance, and may even be disappointed not to feel it
> from the leadership.


-- 

*Nate Smith*
Director of Technology Innovation
n...@hotosm.org
@nas_smith

*Humanitarian OpenStreetMap Team*
*Using OpenStreetMap for Humanitarian Response & Economic Development*
web <http://hotosm.org/> | twitter <https://twitter.com/hotosm> | facebook
<https://www.facebook.com/hotosm> | donate <https://donate.hotosm.org>

_______________________________________________
HOT mailing list
HOT@openstreetmap.org
https://lists.openstreetmap.org/listinfo/hot
