Fiddler Labs Aims to Remove Artificial Intelligence Bias Inequities Through 'Explainability in AI'
Understanding how an artificial intelligence system generates its predictions is crucial for avoiding bias, says a startup that has been developing what it calls “explainability in AI.”
When an AI system rejects or approves a person’s loan application, for example, that decision needs to be analyzed for fairness.
“You can actually create a dystopian world where some people get really great decisions from your systems and where some people are left out,” said Krishna Gade, co-founder and chief executive officer of Fiddler Labs Inc.
Modeling data can be outdated or sourced incorrectly, and that, among other things, causes problems. Gade believes the black-box nature of an AI or machine-learning engine, where one can’t simply open up the code and read it the way one can with traditional software, calls for special tools.
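The article doesn’t detail Fiddler’s own techniques, but one common way to peer inside a black-box model is to measure how strongly each input feature drives its predictions. The sketch below is purely illustrative: it uses a hypothetical loan-approval dataset with made-up feature names and scikit-learn’s permutation-importance utility, not Fiddler Labs’ product or API.

```python
# Illustrative sketch only: a hypothetical loan-approval model explained with
# permutation importance. Feature names and data are fabricated for demonstration;
# this is not Fiddler Labs' method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Synthetic applicant features (all values are made up).
X = np.column_stack([
    rng.normal(650, 50, n),         # credit_score
    rng.normal(55_000, 15_000, n),  # annual_income
    rng.normal(0.3, 0.1, n),        # debt_to_income
])
feature_names = ["credit_score", "annual_income", "debt_to_income"]

# Synthetic approval labels driven mostly by credit score and debt ratio.
y = ((X[:, 0] > 640) & (X[:, 2] < 0.35)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops, a rough proxy for that feature's influence.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

An analysis like this can flag, for instance, that a model leans heavily on a feature that correlates with a protected attribute, which is the kind of fairness check the loan example above calls for.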