Artificial intelligence could one day destroy the human race, but for now it will have to settle for removing bad Wikipedia edits.
The Wikimedia Foundation is embracing machine learning to make the editing process more streamlined and forgiving for new contributors.
“Wikipedia is edited about half a million times per day,” explains Wikimedia employee Aaron Halfaker. “In order to maintain the quality of Wikipedia, this firehose of new content needs to be constantly reviewed by Wikipedians. The Objective Revision Evaluation Service (ORES) functions like a pair of X-ray specs, the toy hyped in novelty shops and the back of comic books—but these specs actually work to highlight potentially damaging edits for editors. This allows editors to triage them from the torrent of new edits and review them with increased scrutiny.”
ORES works by combining data created by Wikipedia editors with open-source machine learning algorithms. This enables it to judge the quality of edits and determine whether they are innocent mistakes or deliberately damaging. The service is scalable and currently supports 14 different language Wikipedias as well as Wikidata.
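In practice, ORES is exposed as a web service that scores individual revisions. The sketch below shows how a tool might build a scoring request and read a “damaging” probability out of the reply; the URL layout and response shape follow the public ORES v3 API, but both should be treated as assumptions here, and the sample response is fabricated for illustration.

```python
# Minimal sketch of consuming ORES scores. Endpoint layout and response
# shape are assumptions modelled on the public ORES v3 API; the sample
# response below is illustrative, not real data.

def ores_score_url(wiki, rev_id, model="damaging"):
    """Build an ORES v3 URL that scores one revision with one model."""
    return f"https://ores.wikimedia.org/v3/scores/{wiki}/{rev_id}/{model}"

def damaging_probability(response, wiki, rev_id, model="damaging"):
    """Extract the probability that an edit is damaging from an
    ORES-style response dictionary."""
    score = response[wiki]["scores"][str(rev_id)][model]["score"]
    return score["probability"]["true"]

# Hypothetical response, trimmed to the fields used above.
sample = {
    "enwiki": {
        "scores": {
            "123456": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.97, "true": 0.03},
                    }
                }
            }
        }
    }
}

print(ores_score_url("enwiki", 123456))
print(damaging_probability(sample, "enwiki", 123456))
```

A patrolling tool could then sort incoming edits by this probability, surfacing the likely-damaging ones for human review while leaving low-scoring edits alone.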
As well as improving the quality of edits, it is hoped that ORES will encourage more individuals to become Wikipedians. Editor numbers have fallen by 40 per cent over the past eight years, with many departing contributors unsure why their changes had been removed. ORES will treat genuine mistakes more sympathetically, with Halfaker saying that reverted edits should be accompanied by an explanatory message.
The service has been in testing for months and is now available for anyone to experiment with. Crucially, however, Halfaker stresses that ORES will not be forced upon editors, and believes that it shouldn’t be viewed differently from other software changes made to the site, despite the use of artificial intelligence.