Build more inclusive TensorFlow pipelines with fairness indicators / Tulsee Doshi, Christina Greer.

On-screen presenter
Doshi, Tulsee
Format
Video/Projected medium
Language
English
Published/Created
[Place of publication not identified] : O'Reilly Media, 2020.
Description
1 online resource (1 streaming video file (35 min., 58 sec.)) : digital, sound, color

Details

Summary note
"Machine learning (ML) continues to drive monumental change across products and industries. But as we expand the reach of ML to even more sectors and users, it's ever more critical to ensure that these pipelines work well for all users. Tulsee Doshi and Christina Greer outline their insights from their work in proactively building for fairness, using case studies built from Google products. They also explain the metrics that have been fundamental in evaluating their models at scale and the techniques that have proven valuable in driving improvements. Tulsee and Christina announce the launch of Fairness Indicators and demonstrate how the product can help with more inclusive development. Fairness Indicators is a new feature built into TensorFlow Extended (TFX) and on top of TensorFlow Model Analysis. Fairness Indicators enables developers to compute metrics that identify common fairness risks and drive improvements. You'll leave with an awareness of how algorithmic bias might manifest in your product, the ways you could measure and improve performance, and how Google's Fairness Indicators can help."--Resource description page.
Notes
Title from resource description page (viewed July 21, 2020).
Participant(s)/Performer(s)
Presenters, Tulsee Doshi, Christina Greer.
OCLC
1176539492