Princeton University Library Catalog
Build more inclusive TensorFlow pipelines with fairness indicators / Tulsee Doshi.
Author
Doshi, Tulsee
Format
Video/Projected medium
Language
English
Edition
1st edition.
Published/Created
O'Reilly Media, Incorporated, 2020.
Description
1 online resource.
Details
Subject(s)
Artificial intelligence
Machine learning
Author
Greer, Christina
Related name
Safari, an O'Reilly Media Company
Library of Congress genre(s)
Video recordings
Series
Safari Books Online (Series)
Summary note
Machine learning (ML) continues to drive monumental change across products and industries. But as ML reaches more sectors and users, it is ever more critical to ensure that these pipelines work well for all users. Tulsee Doshi and Christina Greer outline insights from their work proactively building for fairness, using case studies drawn from Google products. They also explain the metrics that have been fundamental in evaluating their models at scale and the techniques that have proven valuable in driving improvements. Tulsee and Christina announce the launch of Fairness Indicators and demonstrate how the product can support more inclusive development. Fairness Indicators is a new feature built into TensorFlow Extended (TFX) on top of TensorFlow Model Analysis; it enables developers to compute metrics that identify common fairness risks and drive improvements. You'll leave with an awareness of how algorithmic bias might manifest in your product, the ways you could measure and improve performance, and how Google's Fairness Indicators can help.
Prerequisite knowledge: A basic understanding of TensorFlow (useful but not required).
What you'll learn: How to tactically identify and evaluate ML fairness risks using Fairness Indicators.
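As a rough illustration of the workflow the summary describes, the sketch below computes Fairness Indicators with TensorFlow Model Analysis, using the TFMA API that was current around the time of this recording (early 2020). The model path, data path, and the 'gender' slice column are hypothetical placeholders, not details from the recording.

    # Sketch: computing Fairness Indicators with TensorFlow Model Analysis
    # (TFMA 0.2x-era API). Paths and the slice column are placeholders.
    import tensorflow_model_analysis as tfma
    from tensorflow_model_analysis.addons.fairness.view import widget_view

    # Wrap an exported EvalSavedModel and attach the Fairness Indicators
    # callback, which adds confusion-matrix-based metrics (false positive
    # rate, false negative rate, etc.) at several decision thresholds.
    eval_shared_model = tfma.default_eval_shared_model(
        eval_saved_model_path='/path/to/eval_saved_model',  # placeholder
        add_metrics_callbacks=[
            tfma.post_export_metrics.fairness_indicators(
                thresholds=[0.25, 0.5, 0.75]),
        ])

    # Evaluate overall and sliced by a sensitive feature ('gender' is a
    # hypothetical column) so per-group metrics can be compared.
    eval_result = tfma.run_model_analysis(
        eval_shared_model=eval_shared_model,
        data_location='/path/to/eval_data.tfrecord',  # placeholder
        file_format='tfrecords',
        slice_spec=[
            tfma.slicer.SingleSliceSpec(),                    # overall
            tfma.slicer.SingleSliceSpec(columns=['gender']),  # per group
        ])

    # In a notebook, render the interactive Fairness Indicators widget.
    widget_view.render_fairness_indicator(eval_result)

In a notebook, the final call renders the Fairness Indicators widget, which plots each metric per slice at each threshold so gaps between groups are visible at a glance.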
Copyright note
Copyright © O'Reilly Media, Incorporated.
Issuing body
Made available through: Safari, an O'Reilly Media Company.
Source of description
Online resource; title from title screen (viewed February 28, 2020).
Participant(s)/Performer(s)
Presenters, Tulsee Doshi, Christina Greer.
OCLC
1143019008
Other standard number
0636920373391
Supplementary Information
Other versions
Build more inclusive TensorFlow pipelines with fairness indicators / Tulsee Doshi, Christina Greer.