"I included this case study because AI fascinates me, not just as a buzzword, but as a tool with real potential to shape how we live, decide, and interact. I have always learned best by doing, and when I am building something that matters. This project is my way of exploring AI not from the outside, but by stepping into it, researching how it works, questioning where it fits, and designing something that puts it to meaningful use. It’s equal parts curiosity, experimentation, and love for turning complexity into something human." 🌱

AI Ethics & Transparency

AI Tools

Explainable AI (XAI)

Human-Centered AI Design

Scalable UX Frameworks

Designing for Transparency in Biased AI Systems

Timeline

Ongoing

Client

My Inner Nerd

Role

UX/UI Designer

Overview

As AI is increasingly adopted to manage large-scale processes, it brings with it a crucial responsibility: designing systems that prioritise fairness, transparency, and their impact on people. Today, numerous articles and news reports highlight the pervasive biases present in these systems.

This case study began from a shared frustration, one voiced by friends and family and felt in my own life: being rejected by a system and not knowing why. Whether it was a denied loan application, a visa delay, or a service quietly withdrawn, we all have stories of being dismissed by a system, often powered by AI, with no clear explanation. The answer wasn’t always “no.” Sometimes, it wasn’t an answer at all.

As I explored the design problem, I found two intertwined issues:

  1. AI systems don’t communicate rejection well.
    Often, users don’t get a definitive response. Systems reroute them, go silent, or show vague failure messages. In emotionally sensitive or high-stakes scenarios like banking, housing, or travel, the result is confusion, mistrust, and a lack of closure.

  2. Some rejections stem from biased or opaque decision-making.
    AI is trained on historical data, and history often contains human bias. Users from certain regions, income levels, or demographics may be disproportionately rejected. But without visibility into why, these patterns go unnoticed and unchallenged.

This case study explores both problems and how we can design better rejection experiences in AI-powered systems.

01 - Problem Hypothesis

User Problem

Users interacting with AI-powered systems often encounter one of two experiences: a rejection that feels arbitrary or, worse, no clear response at all. In sectors like banking, cloud infrastructure, healthcare, or public services, users are often:

- Denied loans, services, or access
- Left without an explanation
- Given next steps that don’t reflect their real situation

This leads to:

- Mistrust in the system
- Decreased user engagement
- No opportunity for the user to learn and improve future outcomes
- Missed feedback loops for detecting and mitigating algorithmic bias

Design Challenge

How might we design transparent, respectful rejection experiences in AI-powered systems, ones that:
- Help users understand why they were rejected,
- Maintain emotional safety and dignity,
- Surface patterns of bias or uncertainty,
- And offer actionable next steps without compromising system integrity or trust?

02 - Research

AI Systems Research - Ongoing

To design this project, I am currently researching key AI concepts, including:

- Machine Learning basics: understanding how models are trained and make decisions.
- Explainable AI (XAI): tools like SHAP and LIME that help reveal why AI makes certain predictions (see the sketch after this list).
- Bias & Fairness: how historical data causes bias, and methods to detect and reduce it.
- Ethics & Regulations: GDPR’s Right to Explanation and frameworks for responsible AI.
- Human-AI Interaction: research on how users perceive AI decisions and the importance of clear, empathetic explanations.
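
To make the XAI item above concrete, here is a minimal sketch of how a library like SHAP can attribute a single automated rejection to its input features. The model, the synthetic loan-application data, and the feature names are all illustrative assumptions for this case study, not anything from a real product or dataset.

```python
# Minimal sketch: attributing one automated rejection to its input features
# with SHAP. All data, features, and thresholds below are made up for
# illustration; this is not the project's actual model or dataset.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical loan-application data.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "credit_history_years": rng.integers(0, 30, 500).astype(float),
    "region_code": rng.integers(0, 5, 500).astype(float),
})
# Toy "approval" rule standing in for historical (possibly biased) outcomes.
y = ((X["income"] + 2_000 * X["credit_history_years"]) > 70_000).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# For one applicant, rank the features that pushed the decision down.
# These attributions are the raw material for a plain-language rejection reason.
applicant = 0
contributions = sorted(zip(X.columns, shap_values[applicant]), key=lambda p: p[1])
for feature, value in contributions:
    print(f"{feature}: {value:+.3f}")

# Group-level check: compare predicted rejection rates across the (toy)
# "region_code" groups, a first step toward surfacing disparate impact.
preds = pd.Series(model.predict(X), index=X.index)
print((1 - preds).groupby(X["region_code"]).mean())
```

Mapping the strongest negative contributions to plain-language reasons, and checking rejection rates across groups, is the kind of raw material a transparent, respectful rejection experience could be built on.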

Note: If you’re a professional, I hope you can empathise with how hard it is to write case studies while absorbing new knowledge at the same time - like, really hard! I am working on them right now, and they will be uploaded by September 15th. Stay tuned, and please send me good vibes while I wrestle with them!


Enabling what truly matters in a world full of noise