Researchers draw on ideas used in software development
‘Bug bounties’ are a commonly used tool for spotting errors in software. A report from a group of prominent AI researchers has proposed a similar approach as part of a ‘robust toolbox of mechanisms’ for verifying claims about AI, reports the Financial Times. ‘Bias bounty hunters’ could include researchers, members of the public and journalists who find apparent bias when using AI-driven systems. The report is designed to move beyond ‘abstract ethical concerns’ and focus on actionable solutions, says the newspaper. Institutions involved in the research include OpenAI, Google, the Alan Turing Institute and Cambridge University.
Read more here.