Science and Research Content

Making Artificial Intelligence Trustworthy By Putting Humans Back In the Loop


As Artificial Intelligence (AI) becomes an increasingly routine part of our daily lives, the decisions and predictions made by AI-enabled systems are having a profound impact. However, there is little visibility into how AI-enabled systems arrive at these decisions and predictions, and this lack of visibility gives rise to distrust. So, does it follow that bringing a human element into the decision-making process will make AI-enabled systems trustworthy?

Scientists at Germany's Technische Universität Darmstadt have published new work aimed at increasing people's trust in Machine Learning (ML), an application of AI. Lead author Patrick Schramowski and colleagues propose having a human check the explanations provided by a neural network and correct the model when something goes wrong. In doing so, their idea is to extend explainable AI and interpretable AI.

"It is necessary to learn and explain interactively, for the user to understand and appropriately build trust in the model's decisions," write Schramowski and colleagues in Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations. Their solution "XIL" stands for "explanatory interactive learning," and emphasizes on providing explanations of machine behavior and the exchange between person and machine. In their conclusion, the authors write that they hope to bring this interactive element to lots of other forms of explainable or interpretable AI.

However, the authors do not say whether their approach can be used beyond the simple supervised classifiers they worked with. This unanswered question is important because feature discovery in deep learning is about finding things that might be unknown to humans. It is therefore not clear how a human could step into the loop to correct errors made by a machine in instances where the machine has, in some sense, surpassed the human's domain knowledge. This lack of clarity does not mean that the findings of Schramowski and colleagues cannot apply to more complex machine learning decision-making processes.

The work done by Schramowski and colleagues offers a framework for interaction between humans and machines and a wish list of desired outcomes to work toward. What remains to be explored is how broadly their approach can be applied.

The original article was published by ZDNet.


