Google AI today released TensorFlow Constrained Optimization (TFCO), a library for training supervised machine learning models against multiple metrics and for “optimizing inequality-constrained problems.”
The library is designed to help address issues like fairness constraints and predictive parity, and to help machine learning practitioners better understand metrics such as true positive rates for residents of certain countries, or recall of illness diagnoses across age and gender groups.
In tests with a Wikipedia data set, the library achieved lower false-positive rates across groups defined by race, religion, gender identity, or sexuality when predicting whether a Wikipedia comment is toxic, while maintaining similar overall accuracy.
TFCO is made to “take into account the societal and cultural factors necessary to satisfy real-world requirements,” said Andrew Zaldivar on behalf of the TFCO team today in a Google AI blog post.
“The ability to express many fairness goals as rate constraints can help drive progress in the responsible development of machine learning, but it also requires developers to carefully consider the problem they are trying to address,” he said. “A ‘safer’ alternative is to constrain each group to independently satisfy some absolute metric, for example by requiring each group to achieve at least 75% accuracy. Using such absolute constraints rather than relative constraints will generally keep the groups from dragging each other down.”
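The “absolute constraint” Zaldivar describes can be sketched as a Lagrangian-style training loop: minimize overall loss while pushing each group’s accuracy toward a fixed floor. The snippet below is an illustrative plain-NumPy sketch of that idea, not TFCO’s actual API; all names, the synthetic data, and the hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two groups whose labels are noisier for group 1,
# so an unconstrained model tends to serve group 1 worse.
n = 400
group = rng.integers(0, 2, size=n)           # group membership (0 or 1)
x = rng.normal(size=(n, 2))
noise = np.where(group == 0, 0.3, 1.0)
y = (x[:, 0] + noise * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)          # logistic regression weights
lam = np.zeros(2)        # one Lagrange multiplier per group constraint
target_acc = 0.75        # require roughly 75% accuracy in each group
lr_w, lr_lam = 0.5, 0.1

for _ in range(500):
    p = sigmoid(x @ w)
    # Objective: overall log loss (a smooth surrogate for error rate).
    grad_obj = x.T @ (p - y) / n
    # Constraint surrogate: each group's log-loss gradient, weighted by
    # that group's multiplier.
    grad_con = np.zeros(2)
    for g in (0, 1):
        m = group == g
        grad_con += lam[g] * (x[m].T @ (p[m] - y[m]) / m.sum())
    w -= lr_w * (grad_obj + grad_con)
    # Gradient ascent on the multipliers: raise lam[g] while group g
    # falls short of the accuracy floor, shrink it otherwise.
    for g in (0, 1):
        m = group == g
        acc_g = ((p[m] > 0.5) == (y[m] > 0.5)).mean()
        lam[g] = max(0.0, lam[g] + lr_lam * (target_acc - acc_g))

per_group_acc = [
    (((sigmoid(x[group == g] @ w) > 0.5) == (y[group == g] > 0.5)).mean())
    for g in (0, 1)
]
```

Because each constraint references only its own group’s metric, no group’s target depends on another group’s performance, which is the sense in which absolute constraints keep groups from “dragging each other down.”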
The library, which includes an optional “two-dataset” approach to improving generalization, is built on a trio of research papers published last year, according to the library’s GitHub page.
The release of the TFCO library comes a day after Google removed gendered labels from the Cloud Vision API.