Drawing on years of fieldwork, MIT researchers have developed a taxonomy to help developers design features that will be easier for their target audience to understand. To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model’s prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.
The researchers hope their work will inspire model builders to focus on explainability from the beginning of the development process.
Building on this work, the researchers are developing a system that lets a model developer handle complicated feature transformations more efficiently in order to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that decision-makers can understand.
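To make the idea of "transforming features into formats a layperson can comprehend" concrete, here is a minimal, hypothetical sketch. It is not the researchers' system; the feature names, encoding scheme, and `explain_features` helper are all illustrative assumptions. It shows one common case: undoing one-hot encoding and z-score standardization so a model-ready row reads as plain values.

```python
# Hypothetical sketch: undoing common "model-ready" transforms so a
# layperson sees readable values. Names and scheme are illustrative
# assumptions, not the MIT system described in the article.

def explain_features(row, spec):
    """Translate one model-ready feature row into readable name/value pairs.

    spec maps a readable feature name to how its model-ready columns were
    produced: ("onehot", [columns]) or ("zscore", column, mean, std).
    """
    readable = {}
    for name, rule in spec.items():
        if rule[0] == "onehot":
            # Pick the active category among the one-hot columns.
            _, columns = rule
            active = [c for c in columns if row.get(c, 0) == 1]
            readable[name] = active[0].split("=", 1)[1] if active else "unknown"
        elif rule[0] == "zscore":
            # Undo standardization: x = z * std + mean.
            _, column, mean, std = rule
            readable[name] = row[column] * std + mean
    return readable

spec = {
    "home type": ("onehot", ["home=apartment", "home=house"]),
    "income ($)": ("zscore", "income_z", 52000.0, 18000.0),
}
row = {"home=apartment": 0, "home=house": 1, "income_z": 0.5}
print(explain_features(row, spec))  # → {'home type': 'house', 'income ($)': 61000.0}
```

In practice a real system would also need to handle transforms that are not cleanly invertible (binning, hashing, learned embeddings), which is part of why the researchers treat this as a design problem rather than a purely mechanical one.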
The original article was published by SciTechDaily.