If the issue is that tool edits should be reviewed differently than bot edits, then that is just another reason to make a separate flag, independent of the bot flag, for these edits. That way both tool and bot edits could be filtered out and reviewed separately.
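For illustration, here is a minimal sketch of how the existing "show bots" filter maps onto the MediaWiki Action API: bot edits carry a flag that `rcshow=!bot` can exclude, while tool-assisted edits made under a user account have no such flag and so cannot be filtered. The `rcshow`/`rcprop` parameters are the real API ones; the function name and endpoint are just for the example.

```python
from urllib.parse import urlencode

def recentchanges_url(endpoint, hide_bots=True, limit=50):
    """Build an Action API query URL for recent changes,
    optionally hiding edits flagged as bot edits."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|user|flags",
        "rclimit": limit,
        "format": "json",
    }
    if hide_bots:
        # "!bot" excludes edits carrying the bot flag; there is no
        # analogous filter for automated "bot-like" edits made by
        # humans with tools, which is exactly the gap discussed here.
        params["rcshow"] = "!bot"
    return endpoint + "?" + urlencode(params)

print(recentchanges_url("https://en.wikipedia.org/w/api.php"))
```

A separate tool flag would let the same query exclude (or select) tool edits in one more `rcshow` value.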
On Fri, Mar 6, 2015 at 5:31 PM, Petr Bena <[email protected]> wrote:
> I randomly opened the RecentChanges page on enwiki and this is what I saw:
> http://img.ctrlv.in/img/15/03/06/54f9d5645eb03.png
> Of 50 edits, at least 8 were automated, just as interesting as any
> regular bot edit.
>
> It is usually even worse; as you can see, about 20% of all edits
> currently visible in recent changes are automated "bot-like" edits made
> by humans. When I enable "show bots", out of 50 edits I see 1 edit made
> by a bot. From simply observing recent changes you will see that
> bots produce far fewer edits than users with automated tools.
> Still, bots are a problem that needs to be filtered out, while these
> users are not?
>
> This was originally my point. I don't really care whether we just extend
> the bot flag to regular users as well, or whether we create a new flag,
> but we should do something about this. It would definitely make life
> easier for many users, especially those who actively review the
> contributions of others.
>
> On Fri, Mar 6, 2015 at 5:13 PM, Brad Jorsch (Anomie)
> <[email protected]> wrote:
>> <Note this reply is written with my enwiki community member hat on, and
>> in no way represents anything official>
>>
>> On Fri, Mar 6, 2015 at 5:19 AM, Ricordisamoa <[email protected]>
>> wrote:
>>
>>> It is complex and bureaucratic on the English Wikipedia, i.e., less
>>> than 1/890 of the projects.
>>>
>>
>> I note that enwiki's process for receiving the bot flag and its rules
>> around bot editing are "complex and bureaucratic" in large part because
>> what one person thinks is an obvious fix that no one could object to
>> (e.g. "==Section==" versus "== Section ==") turns out to result in a
>> huge outcry when a bot is doing it all over the place.
>>
>> The idea is that the review process (which is basically just having one
>> of a list of experienced bot operators look over the proposal for
>> problems, then review some sample edits) will hopefully catch problems
>> before they become a big deal, and the rules make it easier to stop for
>> (hopefully) calm discussion rather than arguing while perceived
>> disruption continues.
>>
>>> Instead, I think bots are easily tricked by edge cases, whereas human
>>> intervention usually decreases the chance of mistakes.
>>>
>>
>> On the other hand, a tool may be more aggressive in proposing changes
>> that would be fooled by edge cases, relying on the human to fix them
>> before submitting. Even if the tool is not being more aggressive, the
>> human is vulnerable to missing an error through inattention or through
>> misunderstanding their responsibility and blindly clicking "approve".
>> _______________________________________________
>> Wikitech-l mailing list
>> [email protected]
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
