FairSight is an AI auditing and governance platform that systematically detects, explains, and mitigates algorithmic bias in automated hiring systems — enabling transparent, ethically accountable recruitment decisions.
AI systems trained on historical corporate data inevitably absorb and codify preexisting societal biases — perpetuating inequality at unprecedented scale.
Between 2014 and 2017, Amazon developed a machine learning recruiting engine trained on a decade of historical hiring data — data that heavily reflected the long-standing male dominance within the tech industry. The AI systematically downgraded resumes containing terms like "women's chess club captain" and penalised graduates of all-female colleges. Amazon eventually shelved the project after discovering the discriminatory outputs.
Models learn to use facially neutral features — like school names or postal codes — as surrogates for protected characteristics, leading to indirect discrimination.
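One simple way to surface such proxies is to measure how much a facially neutral feature reveals about a protected attribute, for instance via mutual information. The sketch below is purely illustrative (the postcode data is hypothetical, and FairSight is conceptual), but it shows the idea: a feature that perfectly predicts group membership carries the full entropy of the protected attribute.

```python
import math
from collections import Counter

def mutual_information(feature, protected):
    """Mutual information (in bits) between a candidate feature and a
    protected attribute; higher values suggest the feature can act as
    a proxy for the protected characteristic."""
    n = len(feature)
    count_f = Counter(feature)
    count_p = Counter(protected)
    count_joint = Counter(zip(feature, protected))
    mi = 0.0
    for (f, p), c in count_joint.items():
        joint = c / n
        # joint * n * n / (count_f * count_p) is P(f,p) / (P(f) * P(p))
        mi += joint * math.log2(joint * n * n / (count_f[f] * count_p[p]))
    return mi

# Hypothetical data: postcode "A" is occupied exclusively by group 1,
# so postcode is a perfect proxy (MI = 1 bit for a balanced binary group).
postcodes = ["A", "A", "A", "A", "B", "B", "B", "B"]
groups    = [ 1,   1,   1,   1,   0,   0,   0,   0 ]
```

A score near zero indicates the feature tells the model nothing about group membership; a score near the attribute's entropy flags a near-perfect proxy worth investigating.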
Organisations frequently cannot explain why a system made a particular decision. This opacity strips individuals of their right to understand automated decisions made about their lives.
Corporate managers hold complete control while the most affected stakeholders remain powerless to contest algorithmic outcomes.
The human consequences of algorithmic bias are severe, cumulative, and disproportionately affect vulnerable groups.
Women and ethnic minorities are systematically screened out of opportunities for which they are entirely qualified.
Criminal risk-assessment algorithms falsely flag Black defendants as high-risk at nearly twice the rate of white defendants.
Predictive algorithms systematically underestimate the medical needs of marginalised patients due to historically biased cost data.
Voluntary corporate commitments to responsible AI have proven unenforceable and often purely performative.
Engineer-centric tools provide useful metrics but do not generate explanations that affected applicants or non-technical recruiters can understand.
Ethical evaluations are treated as a separate post-deployment audit rather than an integrated part of the core development workflow.
Existing tools rely on surface-level fixes that merely flag output disparities without tracing bias back to specific proxy features in training data.
No widely accessible platform integrates seamlessly into corporate workflows and shifts power back to affected users with actionable recourse.
FairSight is designed to fill precisely this gap.
An AI auditing and governance platform that systematically detects, explains, and mitigates algorithmic bias in automated hiring systems.
Each feature is prioritised based on the severity and root cause of harm — beginning with detection, followed by explanation, mitigation, and stakeholder empowerment.
Continuously audits hiring models to identify discriminatory patterns in decision-making, particularly those arising from biased training data. Evaluates outputs using statistical fairness metrics such as demographic parity and disparate impact analysis.
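Both metrics named above reduce to comparisons of group-level selection rates. A minimal Python sketch, using hypothetical screening outcomes (the data and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not FairSight's actual implementation):

```python
def selection_rate(outcomes):
    """Fraction of candidates advanced (outcome == 1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates; 0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes: 1 = advanced to interview, 0 = rejected.
men   = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 advanced
women = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 advanced

gap = demographic_parity_difference(men, women)   # 0.375
ratio = disparate_impact_ratio(men, women)        # 0.5

# The four-fifths rule treats ratios below 0.8 as potential disparate impact.
flagged = ratio < 0.8
```

Here the women's selection rate is half the men's, so the audit would flag the model for investigation.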
Provides interpretable insights into how hiring decisions are made. Presents decision factors — keywords, experience weighting, inferred patterns — in a human-readable format for both recruiters and applicants.
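For a simple linear screening model, such decision factors can be obtained by ranking each feature's contribution (weight times value) to the final score. The weights and feature names below are hypothetical, chosen only to illustrate the shape of the output, not FairSight's actual explanation method:

```python
def decision_factors(weights, features, top_k=3):
    """Rank each feature's contribution (weight x value) to a linear
    screening score, as a simple human-readable explanation."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(),
                    key=lambda item: abs(item[1]), reverse=True)
    return ranked[:top_k]

# Hypothetical model weights and one applicant's features.
weights  = {"years_experience": 0.6, "keyword_python": 0.9, "gap_in_cv": -1.2}
features = {"years_experience": 4, "keyword_python": 1, "gap_in_cv": 1}

top = decision_factors(weights, features)
# Largest contributions first: experience helped, the CV gap hurt.
```

A dashboard would render these ranked contributions as plain-language statements for recruiters and applicants rather than raw numbers.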
Actively reduces identified biases by adjusting model behaviour through dataset rebalancing, fairness-aware reweighting, and controlled retraining. Enables organisations to correct discriminatory patterns before deployment.
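One standard fairness-aware reweighting scheme is Kamiran and Calders' reweighing, which weights each (group, outcome) pair so that group membership and outcome become statistically independent in the reweighted training data. A minimal sketch with hypothetical screening data (illustrating the technique, not FairSight's actual pipeline):

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Compute a weight for each (group, label) pair:
    w(g, y) = P(g) * P(y) / P(g, y), so that in the weighted data
    group membership is independent of the outcome."""
    n = len(labels)
    count_group = Counter(groups)
    count_label = Counter(labels)
    count_joint = Counter(zip(groups, labels))
    return {
        (g, y): (count_group[g] * count_label[y]) / (n * count_joint[(g, y)])
        for (g, y) in count_joint
    }

# Hypothetical data: men were advanced (label 1) more often than women.
groups = ["m"] * 8 + ["f"] * 8
labels = [1, 1, 1, 1, 1, 1, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0]

weights = reweighing_weights(groups, labels)
# Positive outcomes for the over-selected group are downweighted (0.75);
# positive outcomes for the under-selected group are upweighted (1.5).
```

Training on the weighted examples then reduces the model's incentive to reproduce the historical disparity, without deleting any data.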
Enables affected individuals — particularly applicants — to request explanations, flag concerns, and challenge automated decisions. Introduces a human-in-the-loop mechanism for recruiters and compliance officers.
FairSight recognises that different groups interact with AI systems in fundamentally different ways and possess varying levels of power and awareness.
Actionable insights into model behaviour, enabling informed decisions rather than blind reliance on algorithmic outputs. Integrates into existing workflows.
Diagnostic tools to identify, interpret, and correct bias at the model level. Embeds ethical evaluation directly into the development pipeline.
Audit reports and transparency mechanisms that translate technical findings into policy-relevant insights — no deep technical expertise required.
Fairer evaluation processes, explanations for decisions, and avenues for contesting outcomes. The most affected, now the most protected.
Developer-facing toolkits like IBM AI Fairness 360 flag bias for the organisation. The applicant sees nothing. FairSight's Stakeholder Feedback Interface gives applicants a direct pathway to understand and dispute decisions. No comparable toolkit does this.
Most bias detection tools produce outputs readable only by ML engineers. FairSight's Explainability Dashboard is built for applicants and non-technical recruiters — fulfilling both GDPR Article 22 and ACM 2.7 requirements.
Ethics tools that require pipeline overhauls rarely get adopted. FairSight integrates into existing CI/CD workflows, making ethical evaluation part of the development process rather than an additional obligation sitting outside it.
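As one illustration of what such an integration could look like (the threshold and interface are assumptions, not FairSight's actual design), a CI step might simply fail the build when an audited model's disparate impact ratio drops below the four-fifths threshold:

```python
import sys

def fairness_gate(di_ratio, threshold=0.8):
    """Return a shell-style exit code: nonzero fails the CI job when
    the audited model's disparate impact ratio is below the threshold."""
    if di_ratio < threshold:
        print(f"FAIL: disparate impact ratio {di_ratio:.2f} is below {threshold}")
        return 1
    print(f"PASS: disparate impact ratio {di_ratio:.2f}")
    return 0

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. `python fairness_gate.py 0.72` as a pipeline step
    sys.exit(fairness_gate(float(sys.argv[1])))
```

Because the check is just another exit code, it slots into any existing CI/CD system the way a failing unit test would.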
Amazon's engineers attempted fixes, but the system kept finding new ways to discriminate because the source features were never identified. FairSight's detection engine traces disparities to specific training data features — the kind of comprehensive risk assessment ACM 2.5 requires.
What happened at Amazon was not a one-off failure. The same conditions exist in nearly every sector where AI is used to make decisions about people's lives.
Detect proxy discrimination in hiring datasets. Ensure candidates are evaluated on merit, not biased historical patterns.
Audit lending algorithms to ensure credit is distributed based on true financial health rather than biased historical metrics.
Proactively evaluate patient management models to prevent the systemic underestimation of marginalised medical needs.
Continuously audit risk assessment tools to ensure predictive algorithms do not codify racial disparities under the guise of objective mathematics.
Classifies automated recruitment systems as high-risk AI, requiring conformity assessments, transparency documentation, and ongoing monitoring.
Establishes that individuals subject to automated decisions have the right to human review and the right to contest outcomes.
Computing professionals must provide thorough evaluations, promote fairness, and foster public awareness of how computing systems affect people.
FIT1055 IT Professional Practice and Ethics
FairSight is a conceptual product proposal created solely for academic purposes as part of a university assignment for FIT1055 IT Professional Practice and Ethics at Monash University. This website and the ideas presented within it are not intended to represent a real product, compete with any existing commercial platform, or solicit investment of any kind. All references to third-party tools, companies, and research are used for educational and analytical purposes only. No affiliation with Amazon, IBM, Google, or any other organisation mentioned is implied or intended.