Using AI to Build Explainable AI with Intel Optimizations and DarwinAI - Intel on AI Episode 25
14 Aug 2019
About this Podcast
Deep neural networks (DNNs), which are arguably the most powerful form of AI today, are difficult to build, run, and explain. These challenges constitute significant roadblocks for their adoption in the enterprise. Ironically, AI itself can be used to assist data scientists and developers in building and evaluating DNNs. Sheldon Fernandez, CEO of DarwinAI, joins us to talk about how DarwinAI is using this 'AI building AI' method in their Generative Synthesis platform.
Sheldon explains how their technology reduces the complexity of designing high-performance deep learning solutions and also facilitates explainable deep learning, which allows a user to understand why a network makes the decisions it does. Finally, he describes a recent analysis conducted by the Intel® AI Builders team, in which Darwin-generated networks coupled with Intel® Optimizations for TensorFlow* delivered up to a 16.3X performance increase on ResNet50 workloads and up to 9.6X on NASNet workloads (see Solution Brief here). To learn more, visit darwinai.ca/.