How well do explanation methods for machine-learning models work?

Feature-attribution methods are used to check whether a neural network is relying on the right parts of its input when it completes a task like image classification. MIT researchers developed a way to evaluate whether these feature-attribution methods correctly identify the features of an image that are important to a neural network's prediction.
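As a concrete illustration of what such a method produces, the sketch below computes a simple gradient-based saliency map for a pretrained image classifier. This is a minimal example of one common feature-attribution technique, not the researchers' evaluation procedure; the model choice and the random placeholder input are assumptions made here for illustration, using PyTorch and torchvision.

```python
import torch
from torchvision import models

# Illustrative sketch only: gradient saliency is one common
# feature-attribution method; it is NOT the MIT evaluation procedure.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder tensor standing in for a preprocessed input image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
predicted = logits.argmax(dim=1).item()

# Gradient of the predicted class score with respect to the input pixels:
# large magnitudes are the pixels the method marks as "important".
logits[0, predicted].backward()
saliency = image.grad.abs().max(dim=1).values  # importance map, shape (1, 224, 224)
```

Evaluating an attribution method then amounts to asking whether a map like this actually highlights the features the model relied on, rather than regions that merely look plausible to a human.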

Related Keywords

Yilun Zhou, Julie Shah, Marco Tulio Ribeiro, Microsoft Research, Artificial Intelligence Laboratory, Interactive Robotics Group, National Science Foundation, Computer Science, Serena Booth, Explainable Artificial Intelligence, Feature Attribution, Image Classification