Explaining the explainability of AI
Part of the AI Ignition video and podcast series, brought to you by the Deloitte AI Institute, intended to ignite applied AI conversations in the Age of With™
Why is it important to understand the difference between what the AI machine sees versus what the human sees? Jana Eggers discusses what explainability is and why it is important in AI.
What is the importance of explainability in AI?
How can we understand the difference between what the AI machine sees and what the human sees? Explainability addresses both the how and the why. Join Jana Eggers, CEO of Nara Logics and self-proclaimed math and computer nerd, as she explains the importance of explainability in AI.
Make AI boring again.
Jana Eggers has over 25 years of technology and leadership experience and is the current CEO of Nara Logics, an organization that uses synaptic intelligence to help companies make better decisions. Jana is a frequent speaker, writer, and mentor on AI and startups.
Ignite your AI curiosity by exploring Deloitte’s future of AI in the enterprise video and podcast series
Artificial intelligence continues to change the world around us at a dizzying pace.
Join Beena Ammanath, executive director of the Deloitte AI Institute and technology optimist, as she dives into the hottest topics and trends in artificial intelligence. Each episode will feature conversations with creators, implementers, collaborators, and experts exploring where AI began and where it’s going. Tune in each month to discover new episodes available via video interview or podcast.
Contact us for information on this podcast, or visit the AI Ignition library for the full collection of episodes.
Subscribe now on: iTunes | SoundCloud | Stitcher | Google Play | Spotify