Drudgery is evil; well-written bots save us from drudgery and allow us
to use human time more productively; therefore, well-written bots are
good.

Why should a human clean up whitespace, or add the "cuisine" tag to a
hundred "Burger King" branches? Shouldn't our creative brains invest
their time elsewhere?

I don't think a generic, sweeping rule is a good idea. Each bot should
be analyzed individually. Badly written bots, the ones that add work
rather than save work, should be stopped. Namespace or database
separation means bots cannot save us from drudgery as effectively; the
point of good bots is to save the work that needs to be done on the
actual OSM data.

However, I do think the "burden of proof" lies on the bot owner; it is
the owner's job to explain why the bot is needed, why it is "good", and
to document it extensively and make its operations very transparent. I
try *very hard* to do that with my scripts: every single changeset and
algorithm is logged on my talk page, all changesets carry the bot=yes
tag, a dedicated bot user account is used, and so on.
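For context, here is a minimal sketch (in Python) of what marking a
changeset as a bot edit looks like against the OSM API 0.6. The bot
name, wiki link, and comment are hypothetical, and authentication is
deliberately left out (the live API requires OAuth); the point is only
the bot=yes, comment, and source tags that make the edit traceable.

    # Minimal sketch: open an OSM changeset that is clearly marked as a
    # bot edit (bot=yes, descriptive comment, link to documentation).
    # Illustrative only; authentication is omitted for brevity.
    import requests

    API = "https://api.openstreetmap.org/api/0.6"

    CHANGESET_XML = """<osm>
      <changeset>
        <tag k="created_by" v="example_bot 0.1"/>
        <tag k="bot" v="yes"/>
        <tag k="comment" v="Mechanical edit: whitespace cleanup (see bot wiki page)"/>
        <tag k="source" v="https://wiki.openstreetmap.org/wiki/User:ExampleBot"/>
      </changeset>
    </osm>"""

    def open_changeset(session: requests.Session) -> int:
        """Create a changeset tagged as a bot edit and return its id."""
        resp = session.put(f"{API}/changeset/create",
                           data=CHANGESET_XML,
                           headers={"Content-Type": "text/xml"})
        resp.raise_for_status()
        # The API returns the new changeset id as plain text.
        return int(resp.text)
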

Again, I think every case should be handled individually, with the
burden of proof on the bot owner. Bad, undocumented, undiscussed, and
non-transparent bots are the problem.

https://wiki.openstreetmap.org/wiki/User:SafwatHalaby#SafwatHalaby_bot
