
Power, bias, and the data we don’t see


Erin Beattie:

Who gets seen by the algorithm? Who gets erased? Who gets reduced to a checkbox?

These aren’t abstract questions; they’re built into the systems shaping everything from job applications to legal decisions to healthcare diagnostics.

AI is trained on data, but data isn’t neutral; it reflects history, policy, power, and inequity. It reflects the gaps in our archives and the bias in our systems. And when that data is used to automate decisions, those patterns don’t disappear. They scale.

What gets measured gets repeated.
What gets missed often stays invisible.

Representation is not inclusion

Many AI tools claim to work for “everyone”, but who gets to define that?

Representation and inclusion aren’t the same. Representation means you can be seen in the data. Inclusion means your perspective actually shapes what the data does and how it gets used. That’s where human judgment comes in. AI can surface options, but only people decide which voices are heard and which choices are acted on. As Chad Flinn of Horizon Collective and TeachPod Consulting put it: “AI is part of the toolbox… It’s a brilliant partner for ideation, for sparking creativity, and for seeing possibilities I might have missed on my own. But at the end of the day, that is all it is. A tool.”

In reality, women, racialized communities, disabled people, queer and trans folks, and others who don’t fit the dominant norm are often underrepresented or misrepresented in the training data. When this happens, the resulting tools might not recognize their language, interpret their speech patterns accurately, or assess their qualifications fairly.

This has real-world consequences.

Hiring algorithms have penalized applicants with “non-traditional” education or experience. Facial recognition software has shown significant accuracy gaps when analyzing darker skin tones. Translation tools have defaulted to gendered assumptions about certain roles or professions. These are not bugs. They are design choices built on incomplete data.

And they reflect a bigger issue: whose stories are considered valid in the first place.

Being cautious is not falling behind

I was recently quoted in The Globe and Mail in a piece exploring why many women are hesitant to embrace AI at work. The headline framed it as a potential “career killer,” but I see it differently.

As I shared in the article, “Many women and equity-seeking professionals are not lagging behind; we’re pausing long enough to ask better questions.”

That pause is not a weakness; it’s wisdom.

And it’s not just about equity in data; it’s about survival in a rapidly reshaped workforce.

Brad Marley captured this in a vulnerable and honest way on LinkedIn: “Is it okay to say that I’m sick and tired of reading about AI? I think it’s giving companies permission to reduce marketing and communications staff because they think ChatGPT can do it all. I may or may not lie in bed at night worrying about how I’m going to stay relevant.”

That fear is real. And it isn’t rooted in a fear of change. It is rooted in a fear of being discarded too quickly, too easily, and too impersonally.

Marley went on to say that he would rather see AI focused on critical areas like cancer screening and diagnostic speed, places where it can support life-saving work, not replace skilled professionals who bring context and care.

I keep coming back to that. The question isn’t whether AI can do something; it’s whether it should, and whether the outcome truly serves people, or just cuts costs.

What happens when we mistake prediction for truth?

Predictive models aren’t oracles; they’re guesses based on past patterns. They lack context, they don’t understand nuance, and they certainly don’t understand intent.

This is where leadership matters. What we choose to automate, and how we phrase the question, changes the outcome. Pausing to ask not just can we, but should we, is where responsible leadership shows up. As Flinn shared: “AI can create outputs, but it doesn’t know what it means to care, to feel, to struggle. Those are the human dimensions that drive equity, trust, and meaningful decision-making.”

If we are using AI to filter resumes, evaluate performance, or decide who gets access to resources, we need to ask who trained the system and whose outcomes it was optimized for. We need to ask what is being valued and what is being dismissed.

In leadership, bias is not always loud. Sometimes it looks like assumptions about tone, gaps in visibility, or someone being told they are “not quite the right fit,” even when their qualifications are clear.

When those patterns are baked into an algorithm, they don’t go away; rather, they become harder to see and challenge.

Designing for equity means naming what is missing

If we want AI to work for more people, we have to ask better questions at the design stage. We have to involve the people most impacted. And we have to treat inclusion as a practice, not a checkbox.

That means:

  • Building representative and consent-based datasets

  • Auditing outcomes and error rates across identity groups (see the sketch after this list)

  • Involving people from underrepresented communities in every phase of development

  • Making the limits of the tools visible, not hidden
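That audit doesn’t have to be elaborate to be revealing. Below is a minimal sketch of what a disaggregated error-rate check could look like, assuming predictions live in a pandas DataFrame with hypothetical columns “group” (a self-reported identity attribute), “label” (the true outcome), and “prediction” (the model’s decision); the column names and the data are illustrative, not drawn from any specific system.

```python
# Minimal sketch of a disaggregated error-rate audit.
# Assumes a hypothetical DataFrame with columns: "group", "label", "prediction".
import pandas as pd

def audit_error_rates(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Report accuracy, false negative rate, and false positive rate per group."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub["label"] == 1]  # people who should have passed the screen
        negatives = sub[sub["label"] == 0]
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": (sub["prediction"] == sub["label"]).mean(),
            # False negatives: qualified people the model screens out.
            "false_negative_rate": (positives["prediction"] == 0).mean() if len(positives) else None,
            # False positives: errors in the other direction.
            "false_positive_rate": (negatives["prediction"] == 1).mean() if len(negatives) else None,
        })
    return pd.DataFrame(rows)

# Hypothetical usage, where results_df holds one row per applicant:
# print(audit_error_rates(results_df))
```

A single aggregate accuracy number can look excellent while the false negative rate for one group quietly runs far higher than for another. Breaking results out by group is what makes that gap visible, and therefore possible to challenge.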

Equity is not the same as access. Access without agency is not enough.

The human side of AI is collective

AI isn’t a force of nature; it’s made by people, and that means it can be remade.

The systems we build reflect the values we bring to them. So let’s bring care, rigour, curiosity, and accountability. Let’s pause long enough to ask better questions. And let’s make sure the future of work doesn’t repeat the exclusions of the past.

Inclusion isn’t just about who gets in the room; it’s about who gets heard once they’re there.


This post is part of The Human Side of AI. Explore more insights on creativity, ethics, sustainability, and how AI is reshaping our world by reading the whole series.

