Fairness-aware Configuration of Machine Learning Libraries

Authors: Saeid Tizpaz-Niari, Ashish Kumar, Gang (Gary) Tan, Ashutosh Trivedi

ICSE 2022 (2022)

accountableSE trustworthyAI

Abstract

Machine-learning (ML) software is increasingly deployed in socially critical applications where ensuring fairness is essential. While existing approaches to fairness typically modify the training dataset or the learning algorithm, the configuration of ML hyperparameters also significantly influences fairness outcomes. This paper investigates how hyperparameters can either amplify or mitigate discrimination present in datasets. We design three search-based software testing algorithms to uncover the precision-fairness frontier across the hyperparameter space and augment this with statistical debugging to explain how parameters affect fairness. We implement our techniques in Parfait-ML (PARameter FAIrness Testing for ML Libraries) and demonstrate effectiveness across five mature ML algorithms and six socially critical applications, identifying configurations that improve fairness without sacrificing precision. Surprisingly, some hyperparameter settings (e.g., restricted attribute search space for random forests) can amplify bias, and our findings corroborate similar observations in the literature.
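To make the core idea concrete, the following is a minimal, hypothetical sketch (not Parfait-ML's actual implementation) of searching a hyperparameter space while tracking an accuracy-fairness Pareto frontier. It uses a synthetic dataset with a binary sensitive attribute, a single-threshold classifier standing in for a configurable ML model, and demographic-parity gap as the fairness metric; all names and data are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n=1000):
    """Synthetic individuals: (score, sensitive group, true label).
    A group-dependent score shift simulates bias in the dataset."""
    data = []
    for _ in range(n):
        group = random.randint(0, 1)
        score = random.gauss(0.5 + 0.1 * group, 0.2)
        label = 1 if score + random.gauss(0, 0.1) > 0.6 else 0
        data.append((score, group, label))
    return data

def evaluate(data, threshold):
    """Return (accuracy, demographic-parity gap) for a threshold classifier.
    The threshold plays the role of a model hyperparameter."""
    correct = 0
    pos = {0: 0, 1: 0}
    count = {0: 0, 1: 0}
    for score, group, label in data:
        pred = 1 if score > threshold else 0
        correct += (pred == label)
        pos[group] += pred
        count[group] += 1
    acc = correct / len(data)
    gap = abs(pos[0] / count[0] - pos[1] / count[1])
    return acc, gap

# Random search over the hyperparameter, keeping only configurations
# that are not Pareto-dominated (higher accuracy AND lower gap).
data = make_data()
frontier = []  # list of (threshold, accuracy, gap)
for _ in range(200):
    t = random.uniform(0.0, 1.0)
    acc, gap = evaluate(data, t)
    if any(a >= acc and g <= gap for _, a, g in frontier):
        continue  # dominated by an existing configuration
    frontier = [(ft, a, g) for ft, a, g in frontier
                if not (acc >= a and gap <= g)]
    frontier.append((t, acc, gap))

for t, acc, gap in sorted(frontier):
    print(f"threshold={t:.2f}  accuracy={acc:.3f}  parity_gap={gap:.3f}")
```

The surviving configurations expose the trade-off the paper studies: some settings reach similar accuracy with a markedly smaller disparity between groups, which is exactly the kind of configuration a fairness-aware search aims to surface.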