Science and Research Content

Building Explainability into Machine-Learning Models to Make Features Understandable


Explanation methods often describe how much each feature in a machine-learning model contributed to its prediction. However, if those features are complex or convoluted, such explanations do little to help users understand them. MIT researchers are therefore working to improve the interpretability of features by creating a taxonomy that helps developers design features their target audience can understand.

To build the taxonomy, the researchers defined properties that make features interpretable for all types of users. They also offer model creators guidance on transforming features into formats that a layperson can understand. The researchers hope their work will inspire model builders to consider interpretable features from the beginning of the development process.
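To make the idea of such a transformation concrete, below is a minimal sketch in Python. It is not taken from the MIT taxonomy; the feature name, standardization parameters, and age brackets are all hypothetical, and it simply illustrates turning a model-ready value (a z-scored age) back into a category a layperson can read before it appears in an explanation.

# Hypothetical example: converting a standardized 'age' feature back
# to original units, then to a plain-language category. The mean,
# standard deviation, and bin edges are illustrative only.
def humanize_age(z_scored_age, mean=52.0, std=14.0):
    age = z_scored_age * std + mean
    if age < 40:
        return "under 40"
    if age < 65:
        return "40 to 64"
    return "65 or older"

# An attribution reported on 'age_zscore = 1.2' can now be phrased
# in terms of a readable value instead of a z-score.
print(f"Age group '{humanize_age(1.2)}' increased the predicted risk.")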

The researchers are also developing a system that lets model developers handle feature transformations more efficiently and create human-centered explanations for machine-learning models. The system will also convert explanations produced by algorithms that operate on model-ready datasets into formats decision-makers can understand.
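As a rough illustration of that last step (again a hypothetical sketch, not the researchers' system), the snippet below takes attribution scores keyed by engineered, model-ready feature names and rewrites them as plain-language statements. The feature names, scores, and readable labels are invented for the example.

# Hypothetical attribution scores over model-ready (engineered) features.
raw_explanation = {
    "num_visits_log": 0.42,
    "dept_cardiology": 0.17,
    "age_zscore": -0.08,
}

# Mapping from engineered feature names to plain-language descriptions.
readable_names = {
    "num_visits_log": "number of hospital visits",
    "dept_cardiology": "treated in the cardiology department",
    "age_zscore": "patient age",
}

# Present the explanation in order of importance, in readable terms.
for feature, score in sorted(raw_explanation.items(), key=lambda kv: -abs(kv[1])):
    direction = "raised" if score > 0 else "lowered"
    print(f"{readable_names[feature]} {direction} the prediction (weight {abs(score):.2f})")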

The original article was published by MIT News.
