Explainable Machine Learning with LIME and H2O in R

Highlights

Welcome to this hands-on, guided introduction to Explainable Machine Learning with LIME and H2O in R. By the end of this project, you will be able to use the LIME and H2O packages in R for automatic and interpretable machine learning, build classification models quickly with H2O AutoML, and explain and interpret model predictions using LIME.

Machine learning (ML) models such as Random Forests, Gradient Boosted Machines, Neural Networks, and Stacked Ensembles are often considered black boxes. However, their flexibility makes them more accurate at predicting non-linear phenomena. Experts agree that this higher accuracy often comes at the price of interpretability, which is critical to business adoption, trust, and regulatory oversight (e.g., the GDPR and the right to explanation). As more industries, from healthcare to banking, adopt ML models, their predictions are being used to justify the cost of healthcare and to approve or deny loans. For regulated industries that use machine learning, interpretability is a requirement. As Finale Doshi-Velez and Been Kim put it, interpretability is "the ability to explain or to present in understandable terms to a human."

To successfully complete the project, we recommend that you have prior experience with programming in R, knowledge of basic machine learning theory, and experience training ML models in R.

Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
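The core workflow the course teaches, training a classifier with H2O AutoML and then explaining individual predictions with LIME, can be sketched in a few lines of R. This is a minimal sketch, not the course's actual exercise: the dataset file, the "Attrition" target column, and the runtime budget below are hypothetical placeholders.

```r
# Minimal sketch: H2O AutoML + LIME explanations in R
library(h2o)
library(lime)

h2o.init()

# Hypothetical dataset and target column; substitute your own
data <- read.csv("employee_attrition.csv", stringsAsFactors = TRUE)
data_h2o <- as.h2o(data)

# 80/20 train/test split
splits <- h2o.splitFrame(data_h2o, ratios = 0.8, seed = 1234)
train <- splits[[1]]
test  <- splits[[2]]

y <- "Attrition"                # binary target (a factor column)
x <- setdiff(names(train), y)   # predictor columns

# Let AutoML search for a good classifier within a time budget
aml <- h2o.automl(x = x, y = y, training_frame = train,
                  max_runtime_secs = 120, seed = 1234)
leader <- aml@leader            # best model on the leaderboard

# Build a LIME explainer from the training features and the H2O model
# (the lime package supports H2O models out of the box)
explainer <- lime(as.data.frame(train[, x]), leader)

# Explain the first five test observations
explanation <- explain(as.data.frame(test[1:5, x]), explainer,
                       n_labels = 1, n_features = 4)
plot_features(explanation)      # visualize per-case feature weights
```

The division of labor is the point: AutoML handles model selection and tuning, while LIME fits a simple, local surrogate model around each prediction so the black-box leader's individual decisions can be presented in understandable terms.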

About the Course Provider

Coursera provides access to more than 3,000 courses across a wide variety of subjects, in partnership with different universities and organizations.

Course by Coursera

  • Pace: Self-paced
  • Duration: 2 hours
  • Domain: Data Science & AI
  • Monthly Subscription: option not available
  • Fee: Free
  • Language: English