Amit Sheth

Founding Director,
Artificial Intelligence Institute, and Professor,
Computer Science & Engineering
University of South Carolina

Professor Sheth’s current interests include Artificial Intelligence (esp. knowledge graphs, NLP, deep learning, knowledge-enhanced learning, and conversational AI, esp. chatbots for health and education), the Semantic Web, Physical/IoT-Cyber-Social-Clinical Big Data, Augmented Personalized Health, and AI and Big Data applications (in health and life sciences, social good, disaster management, etc.).

Semantics of the Black-Box: Using knowledge-infused learning approach to make AI systems more interpretable and explainable

Amit Sheth

The recent series of innovations in deep learning has shown enormous potential to impact individuals and society, both positively and negatively. Deep learning models, utilizing massive computing power and enormous datasets, have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of these systems.

Furthermore, deep learning methods have not yet demonstrated the ability to effectively utilize the relevant domain knowledge and experience that are critical to human understanding. This aspect, missing in early data-focused approaches, has necessitated knowledge-infused learning and other strategies for incorporating knowledge computationally. Rapid advances in our ability to create and reuse structured
knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into deep learning methods through knowledge-infused learning. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches, and illustrate it with examples relevant to a few domains.
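The abstract does not spell out the infusion mechanism, but one common "shallow" form of knowledge-infused learning is to link tokens or entities in the input to knowledge-graph nodes and fuse their embeddings with the model's learned representations. The sketch below illustrates that idea only; the dimensions, names, and concatenation strategy are illustrative assumptions, not the speaker's specific method.

```python
import numpy as np

# Toy embeddings (illustrative dimensions): a contextual token embedding
# from a neural language model, and an embedding for the knowledge-graph
# entity that the token was linked to.
rng = np.random.default_rng(0)
token_emb = rng.normal(size=4)   # from the neural model
entity_emb = rng.normal(size=3)  # from the knowledge graph

def infuse(token_vec, entity_vec):
    """Shallow knowledge infusion: concatenate the KG entity embedding
    onto the token embedding so downstream layers can condition on
    explicit, human-curated knowledge as well as learned features."""
    return np.concatenate([token_vec, entity_vec])

fused = infuse(token_emb, entity_emb)
print(fused.shape)  # (7,)
```

Because the entity half of the fused vector corresponds to a named node in the knowledge graph, a prediction can be traced back to explicit concepts, which is one route to the interpretability the talk describes; deeper infusion variants instead modify attention or loss terms inside the network.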

Index Terms—Knowledge Graphs, Knowledge Infusion, NeuroSymbolic AI, Explainability, Interpretability, Black-Box Deep Learning
