My Encounter with ChatGPT’s Racial Bias
AI has a way of holding up a mirror to humanity by amplifying what’s already there. This became clear to me during an interaction with ChatGPT that I didn’t expect to go quite the way it did.
I uploaded an image of a young Black woman with short curly hair and requested a painting in her likeness. Simple enough, right? But when I saw what the AI had created, my stomach dropped. The woman in the painting was clearly Caucasian. Her deep brown skin had been replaced with a pale complexion. Her short hair was smoothed into wavy, shoulder-length blondish strands. Even her eyes had changed—now a striking blue. It was as if the AI had glanced at her, acknowledged her, and then decided to rewrite her into something else.
I decided to dig deeper with ChatGPT. This is part of the actual exchange I had:
Me: Why does the painting look nothing like the photo? The woman isn’t even the same skin tone.
ChatGPT: AI models rely on interpretative algorithms that might not prioritize specific details like skin tone or cultural markers. This can result from biases in the training data.
Me: Why did the AI paint the woman white?
ChatGPT: This happens because of inherent biases in the training data, which might overrepresent certain demographics. Additionally, ambiguous prompts may lead the model to default to patterns it’s seen most frequently.
Me: So, in summary, you’re saying the model has built-in biases?
ChatGPT: Yes, that’s correct. Generative AI models like DALL·E can exhibit biases, not intentionally, but as a byproduct of their training.
Me: But where do these biases actually come from?
ChatGPT:
1. Training Data Bias – AI is trained on large datasets that reflect societal biases. If certain groups are overrepresented or underrepresented, the AI learns and reflects those imbalances.
2. Algorithmic Bias – AI generalizes patterns, often losing nuance. When uncertain, it defaults to the most frequent data it has seen.
3. Design Choices – Developers make choices that affect bias, whether intentional or not.
4. Bias Reinforcement – If biased outputs are used to retrain AI, the problem compounds over time.
Me: So, how do we split responsibility between AI and humans?
ChatGPT:
• AI (Passive) – AI reflects the data it is trained on; it doesn’t choose to be biased.
• Humans (Active) – Developers, organizations, and data curators shape what AI learns. Bias starts with human choices.
Bias in AI: Symptom, Not the Problem
The example was frustrating, but it was also illuminating. Bias in AI isn’t the problem; it’s the symptom. It’s a symptom of the decisions we make when curating datasets, the implicit norms we reinforce, and the systems we leave unchecked. AI doesn’t invent these patterns—it learns from us, amplifies what we teach it, and holds a mirror to our collective flaws.
Think about it. When AI biases emerge, we’re quick to blame the machine. When I saw the image ChatGPT created, my first gut reaction was: “Are you blind??” Only later did I ask: “What the hell are you seeing, and why are you seeing it that way?”
But shouldn’t the real question be: What did we give it to learn from? If the training data overrepresents certain demographics, overlooks others, or perpetuates stereotypes, the AI will faithfully regurgitate those patterns, amplifying them in ways that can have far-reaching consequences.
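To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. The caption data and the toy “generator” are invented for illustration; no real model works this crudely, but it shows how a skewed training set makes a system default to the majority pattern when a prompt is ambiguous:

```python
# Hypothetical illustration: how an imbalanced training set makes a model
# "default" to the majority pattern on an ambiguous prompt. Data is invented.
from collections import Counter

# Imagine a tiny caption dataset used to learn what a "portrait of a woman"
# looks like. Nine of ten examples describe light-skinned, blue-eyed subjects.
training_captions = (
    ["woman, pale skin, blue eyes, long wavy hair"] * 9
    + ["woman, deep brown skin, brown eyes, short curly hair"] * 1
)

def generate(prompt: str, corpus: list[str]) -> str:
    # The prompt "a painting of a woman" matches every caption equally,
    # so this naive generator falls back to whatever it has seen most often.
    matches = [caption for caption in corpus if prompt.split()[-1] in caption]
    most_common_caption, _count = Counter(matches).most_common(1)[0]
    return most_common_caption

print(generate("a painting of a woman", training_captions))
# -> "woman, pale skin, blue eyes, long wavy hair"  (the 90% majority wins)
```

The point of the toy example is not the code; it is that nothing in it “chose” to erase anyone. The skew was already baked into what it was given to learn from.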
Human Responsibility: The Design Problem
Let’s be clear—biases in AI are a design problem, not a data problem. Yes, data is part of it, but the real issue lies in how we prioritize features, interpret ambiguity, and test inclusivity. Developers often fall into the trap of optimizing for speed, scale, and marketability, leaving questions of fairness and representation as afterthoughts.
This is, of course, a technical oversight, but far more importantly, it’s an ethical blind spot. Why? Because tackling bias requires developers and organizations to ask uncomfortable questions: Who benefits from this model? Who doesn’t? And what are we reinforcing by ignoring edge cases and diverse scenarios? The tech industry’s reluctance to address these questions head-on is why AI systems consistently fail to represent the full spectrum of humanity.
AI as a Mirror: What It Reveals About Us
If there’s one thing my interaction with ChatGPT taught me, it’s this: AI has a unique ability to force us to confront our own biases. It’s like holding up a mirror that shows us not who we want to be, but who we are. It reflects the flaws in our systems, our data, and our decisions. And it forces us to ask whether we’re willing to change.
When AI biases go unaddressed, they manifest in ways that affect hiring, healthcare, law enforcement, and even democracy. For example, biased hiring algorithms can exclude certain demographics from job opportunities, perpetuating workplace inequality. In healthcare, AI trained on non-representative data can miss critical diagnoses in underrepresented groups, widening health disparities. And in law enforcement, predictive policing systems can unfairly target marginalized communities, exacerbating systemic injustices.
The Silent Crisis: AI, DEI, and the Political Backlash
The conversation about AI bias doesn’t exist in a vacuum. Right now, we’re seeing a massive political and corporate shift—particularly in the U.S.—against DEI policies. Companies that once championed diversity and inclusion are now quietly rolling back DEI initiatives in the name of “meritocracy” and “neutrality.” But this isn’t neutrality. It’s regression.
And the consequences extend far beyond hiring and corporate culture. AI learns from the world we create. If the biggest tech companies and corporations are systematically deprioritizing DEI, what happens to the data that trains the next generation of AI? It becomes less representative. Less inclusive. More skewed toward the perspectives of those already in power.
Bias in AI then becomes a design choice. If today’s leadership removes diversity-conscious policies, tomorrow’s AI models won’t just inherit bias; they’ll institutionalize it at scale. The more companies deprioritize diverse hiring, leadership, and representation, the less diversity will be reflected in the very systems shaping the future.
This is an ethical disaster unfolding in real time. And because it happens in the background—inside training data, inside corporate decisions, inside boardrooms—it won’t be obvious until it’s too late.
Are we witnessing the beginning of a world where AI only sees what the most powerful corporations and policymakers want it to see? And what happens when the voices missing from AI are the same voices missing from leadership, governance, and decision-making?
What to Do About It
AI doesn’t fix itself. If we want it to reflect the full spectrum of humanity, we need to act—individually and collectively. Here’s how:
1. Expose Bias & Demand Better – AI learns from us. The more we normalize calling out biased outputs—on social media, in professional circles, or directly to AI companies—the harder it is to ignore. Bias thrives in silence. Call it what it is.
2. Shape the Narrative – AI isn’t neutral, and neither is the conversation around it. Tech companies move based on public perception and pressure. Articles, discussions, and posts that challenge AI bias force the industry to pay attention.
3. Push for Representation in AI Development – The teams building AI shape its worldview. If they lack diversity, the technology will too. Whether you’re in tech or not, support organizations advocating for diverse voices in AI development.
4. Demand Transparency & Accountability – No more black-box excuses. AI systems should be audited for bias (see the sketch after this list), and companies should release public reports on their fairness practices. We demand ethical standards in other industries—why should AI be any different?
5. Support Ethical AI Regulation – Policy isn’t just for lawmakers. Individuals, organizations, and advocacy groups play a role in shaping AI’s future. Support regulations that enforce fairness, hold companies accountable, and prevent bias from being hardcoded into the next generation of AI.
6. Make Bias Unprofitable – AI bias isn’t just an ethical failure; it’s a business risk. The more companies realize that skewed, exclusionary AI damages trust, alienates customers, and creates PR disasters, the more they’ll be forced to take it seriously.
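What an audit like the one in point 4 might look like varies by system, but as a rough sketch, here is a hypothetical Python snippet that computes one common fairness metric, the demographic parity difference: the gap in favourable-outcome rates between groups. The group names and decision data are invented for illustration:

```python
# Hypothetical bias-audit sketch: compare favourable-outcome rates across groups.
# All data here is invented; a real audit would use logged model decisions.

def positive_rate(outcomes: list[int]) -> float:
    """Share of cases where the model produced the favourable outcome (1)."""
    return sum(outcomes) / len(outcomes)

# Model decisions (1 = favourable, e.g. "shortlist candidate"), split by group.
decisions_by_group = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% favourable
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],   # 25% favourable
}

rates = {group: positive_rate(d) for group, d in decisions_by_group.items()}
parity_gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: favourable-outcome rate = {rate:.2f}")
print(f"Demographic parity difference = {parity_gap:.2f}")
# A large gap (here 0.50) is a red flag worth investigating and reporting publicly.
```

A single metric like this never settles the question of fairness on its own, but publishing numbers of this kind is exactly the sort of transparency point 4 is asking for.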
AI will shape the future—but whose future will it reflect? That answer has never been less clear.