Making Machine Learning Interpretable

Date: Tuesday, August 28, 2018
Time: 2:00 PM Eastern Daylight Time
Duration: 1 hour

While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is also a serious legal mandate in regulated verticals such as banking and insurance. Moreover, scientists, physicians, researchers, and people in general have the right to understand and trust the models and modeling results that affect their work and their lives. Today, many practitioners are embracing deep learning and machine learning techniques, but what happens when people want to explain these impactful, complex technologies, or when these technologies inevitably make mistakes?

Patrick Hall will share several approaches beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results. Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.

Topics include:

  • Data visualization techniques for representing high-degree interactions and nuanced data structures
  • Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industries
  • Cutting-edge approaches for explaining extremely complex deep learning and machine learning models (see the brief sketch after this list)
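
By way of illustration, one widely used technique in this last family is the global surrogate model: a simple, interpretable model trained to mimic the predictions of a complex one. The sketch below uses scikit-learn and a toy dataset; it is a minimal, assumed example of the kind of approach covered, not code from the webinar itself.

    # A minimal global-surrogate sketch (illustrative; assumes scikit-learn).
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.tree import DecisionTreeRegressor, export_text

    # Toy data and a complex "black box" model.
    data = load_diabetes()
    X, y = data.data, data.target
    black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

    # Train a shallow tree on the black box's *predictions*, not the true
    # labels, so its splits describe the model's learned behavior.
    surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how well the surrogate reproduces the black box (R^2).
    print("fidelity:", round(surrogate.score(X, black_box.predict(X)), 2))

    # The surrogate's rules double as a human-readable explanation.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

The fidelity score shows how closely the surrogate tracks the black box, and therefore how far its rules can be trusted as an explanation of the more complex model.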

Speakers:

Patrick Hall is a data scientist specializing in automated organizational decision making and interpretable artificial intelligence. He’s an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. He’s currently Senior Director of Product at H2O.ai; before that, he held research and development roles at SAS.

Patrick holds multiple patents in automated market segmentation using clustering and deep neural networks. He studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University. He’s also a regular contributor to O’Reilly Media.