AI-powered bias detection for datasets and ML models — with fairness metrics, natural language reports, and explainability tools.
Updated Oct 2, 2025 - Python
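To make the "fairness metrics" idea above concrete, here is a minimal hand-rolled sketch of one common dataset-level metric: the demographic parity difference, i.e. the gap in positive-prediction rates between sensitive groups. The function names and toy data are illustrative assumptions, not any library's API; Fairlearn exposes an equivalent metric (`demographic_parity_difference`), so this is only a didactic stand-in.

```python
# Hand-rolled sketch of demographic parity difference.
# (Illustrative only; Fairlearn provides an equivalent built-in metric.)
from collections import defaultdict

def selection_rates(y_pred, groups):
    """Fraction of positive predictions per sensitive-attribute group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, g in zip(y_pred, groups):
        totals[g] += 1
        positives[g] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, groups):
    """Largest gap between any two groups' selection rates (0 = parity)."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy predictions for two hypothetical groups "a" and "b".
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(y_pred, groups))                # {'a': 0.75, 'b': 0.25}
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

A gap of 0.5 here means group "a" receives positive predictions three times as often as group "b", which is exactly the kind of disparity the tools listed below flag in their fairness audits.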
Demos of Fairlearn and InterpretML, as described in my article on responsible AI.
🔬 Drop in any ML model → get SHAP explainability, fairness audit & drift detection in seconds
Student Success Model (SSM)
A comprehensive fairness analysis of Boston Residents Jobs Policy compliance data, using Fairlearn to detect and mitigate bias in construction-project employment.
Bias-aware machine learning system for fair credit risk assessment | 67% bias reduction | Fairlearn + SHAP
End-to-end bias audit of healthcare ML models using MEPS dataset. Detects racial disparities, applies 4 mitigation techniques (AIF360), explains predictions (SHAP/LIME), and visualizes findings via Streamlit dashboard.
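Several of the repositories above apply mitigation techniques on top of detection. As a rough illustration of one family of approaches, here is a simplified post-processing sketch: choosing a separate score threshold per sensitive group so that every group's selection rate matches a common target. All names and data here are hypothetical; this is a simplified cousin of methods like Fairlearn's `ThresholdOptimizer`, not a reproduction of any of the listed projects.

```python
# Simplified per-group thresholding to equalize selection rates.
# (Illustrative sketch only; real libraries optimize thresholds against
# accuracy and fairness constraints jointly.)

def group_threshold(scores, target_rate):
    """Smallest threshold selecting roughly `target_rate` of the group."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]

def mitigate(scores, groups, target_rate=0.5):
    """Binary decisions using a separate threshold per sensitive group."""
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    thresholds = {g: group_threshold(v, target_rate) for g, v in by_group.items()}
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

# Group "a" scores skew high and group "b" low: a single global cutoff
# of 0.5 would select all of "a" and none of "b". Per-group thresholds
# give both groups the same selection rate.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
groups = ["a", "a", "a", "b", "b", "b"]
print(mitigate(scores, groups, target_rate=0.5))  # [1, 1, 0, 1, 1, 0]
```

The trade-off is typical of post-processing mitigation: parity is enforced at prediction time without retraining, at the cost of applying group-dependent decision rules.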