AI systems could be consistently evaluated and tested against predetermined indicators of bias, with the results made public to incentivise improvement and to give other stakeholders the information they need when engaging with such systems. Such regulatory frameworks need not be hardcoded.
“Unpredictability may be something we look for in intelligence, and if so, then by definition, a true intelligence will be unpredictable and therefore uninterpretable,” says Toyama.