Webinar—Humble AI: Build Guardrails Against Overconfidence

Community Team

As AI becomes ubiquitous, more and more high-stakes decisions will be made automatically by machine learning models. AI can determine the very future of your business and can make life-or-death decisions for real people.

But as the world changes, an AI system is often faced with new examples that it hasn’t seen before, and it may not know the right answer. Without proper guardrails, unchecked automated decisions can quickly turn into catastrophic failures and can reduce trust in AI. As the stakes get higher, it is critical that AI systems are built to be humble: just like humans, AI should understand when it doesn’t know the right answer.

With DataRobot’s Humble AI, models that aren’t confident in their predictions can respond accordingly, whether that means defaulting to a "safe" decision, alerting an administrator for human review, or not making a prediction at all.

Join this webinar to learn how to:

  • Understand the limitations of your model and when it may need human intervention.
  • Create a comprehensive set of Humble AI triggers that protect against common failure modes of model overconfidence.
  • Monitor your model over time for new errors and sources of overconfidence.
  • Build robust, fault-tolerant, humble AI systems using DataRobot’s Humble AI feature in MLOps.
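The trigger-and-action pattern above can be sketched as a simple wrapper around any probabilistic classifier. The sketch below is a hypothetical illustration only, not DataRobot's actual Humble AI API: the names `HumilityTrigger` and `humble_predict`, the action strings, and the output dictionary are all assumptions made for this example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# Hypothetical guardrail sketch (not DataRobot's Humble AI API): if the
# model's top-class probability falls below a threshold, either fall back
# to a safe default, flag the case for human review, or abstain entirely.

@dataclass
class HumilityTrigger:
    threshold: float                    # minimum confidence to accept the prediction
    action: str                         # "safe_default", "human_review", or "abstain"
    safe_default: Optional[str] = None  # label to return when defaulting

def humble_predict(predict_proba: Callable[[List[float]], Dict[str, float]],
                   features: List[float],
                   trigger: HumilityTrigger) -> dict:
    """Wrap a model's class-probability output with a humility check."""
    probs = predict_proba(features)
    # Take the most likely label and its probability as the confidence score.
    label, confidence = max(probs.items(), key=lambda kv: kv[1])
    if confidence >= trigger.threshold:
        return {"prediction": label, "confidence": confidence, "flag": None}
    # Below the threshold: apply the configured humble action.
    if trigger.action == "safe_default":
        return {"prediction": trigger.safe_default, "confidence": confidence,
                "flag": "low_confidence_default"}
    if trigger.action == "human_review":
        return {"prediction": label, "confidence": confidence,
                "flag": "needs_human_review"}
    return {"prediction": None, "confidence": confidence, "flag": "abstained"}
```

For example, a loan-approval model with `threshold=0.8` and `action="safe_default"` would return the safe label "deny" (flagged as a low-confidence default) whenever its top-class probability dips below 0.8, rather than acting on an overconfident guess.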

Host:

  • Jett Oristaglio (DataRobot Data Science and Product Lead)

Click here to access the recording.

(We've also attached the slides from the presentation to this article; see below.)



1 Reply
Community Team

@all: feel free to post any questions you have here (click Reply) ahead of the webinar so they can be addressed during the session. Also, be sure to continue the discussion here with Jett after the live webinar.
