AI Ethics & Governance

Detect. Explain.
Eliminate Bias.

FairSight is an AI auditing and governance platform that systematically detects, explains, and mitigates algorithmic bias in automated hiring systems — enabling transparent, ethically accountable recruitment decisions.

4 Core Modules
ACM Ethics Aligned
GDPR Art. 22 Compliant
CI/CD Native Integration

Algorithmic Bias Is a Systemic Crisis

AI systems trained on historical corporate data inevitably absorb and codify preexisting societal biases — perpetuating inequality at unprecedented scale.

Real Case Study

Amazon's Biased AI Recruitment Tool (2018)

Between 2014 and 2017, Amazon developed a machine learning recruiting engine trained on a decade of historical hiring data — data that heavily reflected the long-standing male dominance within the tech industry. The AI systematically downgraded resumes containing terms like "women's chess club captain" and penalised graduates of all-female colleges. Amazon eventually shelved the project after discovering the discriminatory outputs.

"Bias is a fundamental data failure that perpetuates structural discrimination within corporate environments."

Data Bias & Proxy Discrimination

Models learn to use facially neutral features — like school names or postal codes — as surrogates for protected characteristics, leading to indirect discrimination.

🔒

Lack of Transparency

Organisations frequently cannot explain why a system made a particular decision. This opacity strips individuals of their right to understand automated decisions made about their lives.

Structural Power Imbalance

Corporate managers hold complete control while the most affected stakeholders remain powerless to contest algorithmic outcomes.

Who Gets Hurt

The human consequences of algorithmic bias are severe, cumulative, and disproportionately affect vulnerable groups.

👩‍💼
Job Seekers

Women and ethnic minorities are systematically screened out of opportunities for which they are entirely qualified.

Criminal Justice

Algorithms falsely flag minority defendants as high-risk at vastly disproportionate rates compared to white defendants.

🏥
Healthcare

Predictive algorithms systematically underestimate the medical needs of marginalised patients due to historically biased cost data.

Why Current Solutions Fall Short

Voluntary corporate commitments to responsible AI have proven unenforceable and often purely performative.

Engineer-centric tools provide useful metrics but don't generate explanations that affected applicants or non-technical recruiters can understand.

Ethical evaluations are treated as a separate post-deployment audit rather than an integrated part of the core development workflow.

Existing tools rely on surface-level fixes that merely flag output disparities without tracing bias back to specific proxy features in training data.

No widely accessible platform integrates seamlessly into corporate workflows and shifts power back to affected users with actionable recourse.

FairSight is designed to fill precisely this gap.

Introducing FairSight

An AI auditing and governance platform that systematically detects, explains, and mitigates algorithmic bias in automated hiring systems.

📄 Applicant Resumes: CVs, cover letters, portfolios
🗄 Training Data: historical hiring records
        ↓
🤖 AI Recruitment System: resume screening · candidate scoring · automated shortlisting
        ↓
FairSight Platform
  • Bias Detection Engine: statistical fairness monitoring
  • Explainability Dashboard: SHAP / LIME decision insight
  • Bias Mitigation Module: rebalancing, retraining, thresholds
  • Feedback Interface: human review, applicant recourse
        ↓
Outputs: Fair Shortlisted Candidates · Applicant Explanations · Bias Alerts & Audit Reports

Four Modules, One Mission

Each feature is prioritised based on the severity and root cause of harm — beginning with detection, followed by explanation, mitigation, and stakeholder empowerment.

01
ACM 1.4 — Algorithmic Discrimination

Bias Detection & Monitoring Engine

Continuously audits hiring models to identify discriminatory patterns in decision-making, particularly those arising from biased training data. Evaluates outputs using statistical fairness metrics such as demographic parity and disparate impact analysis.

  • Real-time monitoring integrated into existing recruitment pipelines
  • Detects proxy discrimination at the training data level
  • Computes mutual information between proxy features and protected attributes (see the sketch below)
  • Identifies bias as a structural issue, not isolated output errors
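
Purely as an illustration of the checks listed above, here is a minimal sketch in Python using pandas and scikit-learn. The column names (gender, postal_code, shortlisted) are hypothetical placeholders, not part of any specified FairSight interface.

    # Minimal sketch of the detection metrics above (illustrative only).
    # Column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.metrics import mutual_info_score

    def demographic_parity_gap(df: pd.DataFrame, group: str, outcome: str) -> float:
        """Absolute difference in positive-outcome rates between groups."""
        rates = df.groupby(group)[outcome].mean()
        return float(rates.max() - rates.min())

    def disparate_impact_ratio(df: pd.DataFrame, group: str, outcome: str) -> float:
        """Lowest selection rate divided by highest (four-fifths rule flags < 0.8)."""
        rates = df.groupby(group)[outcome].mean()
        return float(rates.min() / rates.max())

    def proxy_strength(df: pd.DataFrame, feature: str, protected: str) -> float:
        """Mutual information between a feature and a protected attribute.
        High values suggest the feature may act as a proxy for it."""
        return float(mutual_info_score(df[feature], df[protected]))

    # Hypothetical usage on an audit extract of past screening decisions:
    # audit = pd.read_csv("screening_decisions.csv")
    # disparate_impact_ratio(audit, "gender", "shortlisted")
    # proxy_strength(audit, "postal_code", "gender")
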
02
ACM 2.7 — Transparency

Explainability & Transparency Dashboard

Provides interpretable insights into how hiring decisions are made. Presents decision factors — keywords, experience weighting, inferred patterns — in a human-readable format for both recruiters and applicants.

  • Powered by SHAP and LIME feature attribution methods (see the sketch below)
  • Built for non-technical recruiters and affected applicants
  • Aligns with GDPR Article 22 — right to explanation
  • Reduces informational asymmetry between organisations and applicants
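
As a sketch only of how such attributions might be produced, the snippet below uses SHAP's model-agnostic Explainer to rank the features behind one screening decision. The model, background data, and candidate_row are hypothetical stand-ins, not dashboard code from the FairSight design.

    # Illustrative sketch: per-candidate feature attribution with SHAP.
    # `model` is any scikit-learn-style classifier; all names are hypothetical.
    import shap

    def explain_decision(model, background, candidate_row):
        """Rank the features that pushed one screening score up or down."""
        explainer = shap.Explainer(model.predict, background)  # model-agnostic
        explanation = explainer(candidate_row)                 # one-row DataFrame in
        contributions = zip(candidate_row.columns, explanation.values[0])
        # Largest absolute contribution first, for a human-readable summary.
        return sorted(contributions, key=lambda pair: abs(pair[1]), reverse=True)

    # Hypothetical output for one applicant:
    # [("years_experience", +0.21), ("college_name", -0.14), ...]
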
03
ACM 2.5 — Evaluation Failure

Bias Mitigation & Model Correction Module

Actively reduces identified biases by adjusting model behaviour through dataset rebalancing, fairness-aware reweighting, and controlled retraining. Enables organisations to correct discriminatory patterns before deployment.

  • Dataset rebalancing and fairness-aware reweighting (sketched below)
  • Controlled retraining with bias constraints
  • Incremental integration — no full pipeline replacement required
  • Embeds ethical evaluation into the development lifecycle
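
One concrete form of fairness-aware reweighting is the classic reweighing scheme of Kamiran and Calders, sketched below: each training example is weighted so that, after weighting, the protected attribute and the hiring label are statistically independent. Column names are hypothetical.

    # Illustrative sketch of reweighing (after Kamiran & Calders).
    # Weight = P(group) * P(label) / P(group, label), per training example.
    import pandas as pd

    def reweigh(df: pd.DataFrame, protected: str, label: str) -> pd.Series:
        """Per-row sample weights that decorrelate group membership and outcome."""
        p_group = df[protected].value_counts(normalize=True)
        p_label = df[label].value_counts(normalize=True)
        p_joint = df.groupby([protected, label]).size() / len(df)
        return df.apply(
            lambda row: p_group[row[protected]] * p_label[row[label]]
            / p_joint[(row[protected], row[label])],
            axis=1,
        )

    # Most learners accept these weights directly, e.g.:
    # model.fit(X, y, sample_weight=reweigh(train_df, "gender", "hired"))
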
04
ACM 1.2 — Avoid Harm

Stakeholder Feedback & Accountability Interface

Enables affected individuals — particularly applicants — to request explanations, flag concerns, and challenge automated decisions. Introduces a human-in-the-loop mechanism for recruiters and compliance officers.

  • Applicant-facing recourse and challenge mechanism (sketched below)
  • Human-in-the-loop review for contested outcomes
  • Shifts design from organisation-only to equitable socio-technical system
  • Restores trust and accountability in automated decision-making
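
Purely to make the mechanism concrete, the sketch below models a contested decision as a reviewable record; every name in it is a hypothetical illustration, since the interface itself is a design proposal rather than code.

    # Hypothetical sketch of a recourse record for human-in-the-loop review.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class ReviewStatus(Enum):
        OPEN = "open"                  # applicant has challenged the decision
        UNDER_REVIEW = "under_review"  # assigned to a human reviewer
        UPHELD = "upheld"              # original decision confirmed
        OVERTURNED = "overturned"      # decision reversed after review

    @dataclass
    class RecourseRequest:
        applicant_id: str
        decision_id: str
        explanation_shown: list[str]   # attributions shown to the applicant
        applicant_comment: str         # why the applicant disputes the outcome
        status: ReviewStatus = ReviewStatus.OPEN
        reviewer_id: str | None = None
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))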

Built for Everyone Affected

FairSight recognises that different groups interact with AI systems in fundamentally different ways and possess varying levels of power and awareness.

👥

Recruiters & Hiring Managers

Actionable insights into model behaviour, enabling informed decisions rather than blind reliance on algorithmic outputs. Integrates into existing workflows.

Primary Users
🔬

ML Engineers & Data Scientists

Diagnostic tools to identify, interpret, and correct bias at the model level. Embeds ethical evaluation directly into the development pipeline.

Technical Users
📋

Regulators & Compliance Teams

Audit reports and transparency mechanisms that translate technical findings into policy-relevant insights — no deep technical expertise required.

Regulatory
🙋‍♀️

Job Applicants

Fairer evaluation processes, explanations for decisions, and avenues for contesting outcomes. The most affected, now the most protected.

Affected Stakeholders

What Makes FairSight Different

1

Shifts Power Back to Applicants

Developer-facing toolkits like IBM AI Fairness 360 flag bias for the organisation. The applicant sees nothing. FairSight's Stakeholder Feedback Interface gives applicants a direct pathway to understand and dispute decisions. No comparable toolkit does this.

2

Explainability That Actually Reaches People

Most bias detection tools produce outputs readable only by ML engineers. FairSight's Explainability Dashboard is built for applicants and non-technical recruiters — fulfilling both GDPR Article 22 and ACM 2.7 requirements.

3

Built to Actually Be Used

Ethics tools that require pipeline overhauls rarely get adopted. FairSight integrates into existing CI/CD workflows, making ethical evaluation part of the development process rather than an additional obligation sitting outside it.
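
To make that claim concrete: a fairness audit can run as an ordinary pipeline step that fails the build when a threshold is breached. The sketch below applies the four-fifths rule to a hypothetical hold-out file; the file and column names are illustrative, and any CI system that runs tests can run this.

    # Illustrative CI gate: fail the build when the disparate impact ratio
    # on a hold-out audit set breaches the four-fifths rule.
    import sys
    import pandas as pd

    FOUR_FIFTHS = 0.8

    def main() -> int:
        audit = pd.read_csv("audit_holdout.csv")  # hypothetical hold-out predictions
        rates = audit.groupby("gender")["shortlisted"].mean()
        ratio = rates.min() / rates.max()
        print(f"disparate impact ratio: {ratio:.3f} (threshold {FOUR_FIFTHS})")
        return 0 if ratio >= FOUR_FIFTHS else 1  # nonzero exit fails the pipeline

    if __name__ == "__main__":
        sys.exit(main())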

4

Traces Bias to Its Source

Amazon's engineers attempted fixes, but the system kept finding new ways to discriminate because the source features were never identified. FairSight's detection engine traces disparities to specific training data features — the kind of comprehensive risk assessment ACM 2.5 requires.

Broader Implications

What happened at Amazon was not a one-off failure. The same conditions exist in nearly every sector where AI is used to make decisions about people's lives.

💼

Recruitment

Detect proxy discrimination in hiring datasets. Ensure candidates are evaluated on merit, not biased historical patterns.

🏦

Financial Lending

Audit lending algorithms to ensure credit is distributed based on true financial health rather than biased historical metrics.

🏥

Healthcare

Proactively evaluate patient management models to prevent the systemic underestimation of marginalised medical needs.

Criminal Justice

Continuously audit risk assessment tools to ensure predictive algorithms do not codify racial disparities under the guise of objective mathematics.

Regulatory & Ethical Alignment

EU AI Act

Classifies automated recruitment systems as high-risk AI, requiring conformity assessments, transparency documentation, and ongoing monitoring.

GDPR Article 22

Establishes that individuals subject to automated decisions have the right to human review and the right to contest outcomes.

ACM Code of Ethics

Requires computing professionals to provide thorough evaluations, act fairly and avoid discrimination, and foster public awareness of how systems affect people.

Group T7_Le_Roi_Pem

FIT1055 IT Professional Practice and Ethics

RP
Rahul Vedant Pemsing
36305529
SN
Shirdisharan Neeraye
36418609
IM
Ibrahim Mohammed
36618640
KA
Kim Louis Alfatah
35264993
RS
Rishab Somaroo
36373850
AP
Aakash Paul
35960094
📋
Academic Disclaimer

FairSight is a conceptual product proposal created solely for academic purposes as part of a university assignment for FIT1055 IT Professional Practice and Ethics at Monash University. This website and the ideas presented within it are not intended to represent a real product, compete with any existing commercial platform, or solicit investment of any kind. All references to third-party tools, companies, and research are used for educational and analytical purposes only. No affiliation with Amazon, IBM, Google, or any other organisation mentioned is implied or intended.