Businesses often find that their decision-makers reject or doubt Artificial Intelligence (AI) systems. The primary reason is an inability to understand the factors the AI used to reach a decision. According to researchers at MIT, building a taxonomy that is inclusive of everyone who interacts with a Machine Learning (ML) model would help resolve this issue.
Furthermore, the taxonomy would cover how best to explain and interpret different features, and it would inform how to transform hard-to-understand features into easier-to-understand formats for non-technical users. With this, end users would be more inclined to trust the decisions an ML model makes, while also being able to describe accurately why the model reached them.
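To make the idea of a feature-to-format transformation concrete, here is a minimal sketch, not MIT's actual system: it maps raw model features into plain-language descriptions a non-technical user can read. Every feature name, range, and mapping below is hypothetical and chosen only for illustration.

```python
# Hypothetical "feature-to-format" transformations: each entry converts a
# raw, machine-oriented feature value into a human-readable description.
def to_readable(feature_name, raw_value):
    """Translate a raw feature value into a plain-language description."""
    transforms = {
        # z-scored income -> rough dollar figure (hypothetical mean/std)
        "income_zscore": lambda z: f"annual income around ${50_000 + int(z * 15_000):,}",
        # one-hot index -> category label
        "region_onehot": lambda i: ["North", "South", "East", "West"][i] + " region",
        # model probability -> qualitative risk label
        "default_prob": lambda p: "high risk" if p > 0.5 else "low risk",
    }
    return transforms[feature_name](raw_value)

print(to_readable("income_zscore", 1.0))  # annual income around $65,000
print(to_readable("region_onehot", 2))    # East region
print(to_readable("default_prob", 0.8))   # high risk
```

The point of the sketch is that the same underlying value is presented differently depending on the audience: a data scientist may want the z-score, while an end user is better served by the dollar figure or category label.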
The next step for the MIT researchers is to develop a system that lets developers handle feature-to-format transformations faster, which should shorten the time-to-market for ML models.
Click here to read the original article published by RTInsights.