
Why Ethics in AI for Communications Matter More Than Ever

Reading Time: 6 minutes

by Rosemary Sweig

We wouldn’t be human if we weren’t a little distrustful of, even afraid of, what AI means for the communications field and for humanity itself. We’ve all seen science fiction movies like Terminator, Ex Machina, and I, Robot, in which robots take over and we end up in a dreadful dystopia.

AI is powerful, transformational, and full of potential. It may even have the ability to cure humanity’s ills: disease, poverty, hunger, and war. In fact, AI guru Mo Gawdat, former Chief Business Officer at Google X and author of the bestselling book Scary Smart, suggests that AI could help humans achieve utopia. However, that’s a topic for another day.

The Risks of AI

AI is also filled with risks. That’s why ethics must become central to all AI adoption, especially in communications, where trust and reputation are everything.

AI itself is neither good nor bad; it simply does what it is programmed to do. Strong ethical guardrails are essential. Without them, AI can unintentionally spread misinformation, reinforce bias, or erode trust. Ethics is no longer a side conversation. It is a central pillar of leadership. Let’s break it down in clear, practical terms.

AI Can Help or Hurt…It Depends on How We Use It

When used correctly, AI can help communicators move faster and work smarter. However, if misused, it can cause real damage. Here are some important considerations:
  • Misinformation: AI can generate text that appears accurate but is actually incorrect. If no one checks it, that false information can spread quickly.

  • Privacy issues: AI tools may track people’s online activities to show ads about private matters, such as health. That kind of targeting can feel intrusive.

  • Bias: AI may write job ads that repeat old stereotypes, discouraging strong candidates, especially women and members of underrepresented groups, from applying.

  • Lack of human review: In urgent situations, AI might publish messages without human approval. If the message is incorrect or unclear, it can cause confusion or harm.

These issues are not future concerns; they are already happening. That’s why strong ethical practices are essential.

Three Real-World AI Failures That Underscore the Need for Ethics

When Hiring Bias Hides in Code: Amazon’s Misstep

Algorithmic hiring might seem like a smart shortcut, but it can quietly reinforce the very biases we hope to eliminate.

For instance, Amazon built an AI recruiting tool in 2014 to streamline its hiring process. The idea was ambitious: let AI select the best candidates based on historical resume data. But that data came mostly from resumes submitted by men over a ten-year period.

The AI learned to favor male candidates. It even downgraded resumes containing the word “women’s,” like “women’s chess club captain.” When this came to light, Amazon shut the program down.

This case is a stark reminder that AI doesn’t just mirror our thinking; it magnifies it. If we feed it biased history, we get biased futures. Factors like postal codes, hobbies, or club memberships can act as proxies for gender, race, or income, leading the system to screen out candidates on those grounds. This isn’t innovation; it’s discrimination.

Now, jurisdictions like New York City and the European Union are treating AI hiring tools as “high risk” and requiring stricter oversight. That’s the right move for ensuring ethical AI use.

When AI Hits Your Bottom Line: Zillow and Air Canada

AI doesn’t just create theoretical issues; it can also cost real money. Just ask Zillow. In 2021, the company leaned heavily on an algorithm to predict home prices for its home-buying business, but the model significantly overestimated what homes were worth.

They bought too high and sold too low, leading to the shutdown of the entire program and the layoff of a quarter of their staff. This was not just a glitch; it was a business model broken by blind trust in AI.

Similarly, Air Canada ran into trouble when a customer asked its chatbot about bereavement fares. The chatbot said the discount could be claimed retroactively, after booking, which contradicted the airline’s actual policy. The customer took the airline to a tribunal and won: Air Canada was held responsible for what its chatbot stated.

These are not just tech failures; they’re failures of human oversight. If no one is checking what AI is learning or saying, the costs pile up quickly: lost money, eroded trust, and lost customers.

AI in the Exam Room: IBM’s Watson for Oncology

IBM’s Watson for Oncology was designed to be groundbreaking—an AI tool to help doctors deliver personalized cancer treatment. However, in 2018, the system was criticized for recommending unsafe or inaccurate treatments.

Watson had been trained largely on synthetic data rather than real patient cases, leading to unreliable outputs. Ultimately, IBM discontinued the project.

This serves as a powerful reminder that AI, regardless of sophistication, cannot replace human judgment, especially in complex areas like healthcare.

And it’s a cautionary tale for communicators, too. AI may generate ideas or content at incredible speed, but if no one checks its assumptions, we risk disseminating misinformation that appears credible. Ethics isn’t merely a tech issue; it’s a leadership responsibility.

Case summaries adapted from research published by Harvard University’s Center for Ethics.

What Do Ethics in AI Communications Really Mean?

Ethics means using AI carefully and honestly. It involves prioritizing truth and openness over speed or convenience. AI can assist, but it shouldn’t replace human judgment. Ultimately, people are still accountable for what is communicated.

Here’s how to put ethics into action:

  • Be honest: Let people know if AI contributed to the message. Even a small note like “Created with AI and reviewed by a person” makes a difference.

  • Always review: Ensure human oversight of AI-generated content before it is published. Machines are not infallible.

  • Avoid manipulation: AI can craft emotionally charged messages, but using that capability to manipulate audiences, especially after a crisis, is unethical.

  • Ensure fairness: Use tools to identify bias and create accessible content. Solicit feedback from diverse audiences to ensure your message resonates with everyone. (A simple sketch of what such checks might look like in practice follows this list.)
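
To make the review and fairness points above concrete, here is a minimal sketch of what a pre-publish check for AI-assisted content could look like. It is purely illustrative: the disclosure wording, the flagged-term list, and the review rule are assumptions invented for this example, not an established standard, and none of it replaces human judgment.

```python
# Hypothetical pre-publish checks for AI-assisted content.
# Illustrative only: the disclosure wording, the flagged terms, and the
# review rule below are assumptions, not any organization's real policy.

DISCLOSURE_NOTE = "Created with AI and reviewed by a person"
FLAGGED_TERMS = {"guaranteed", "cure", "risk-free"}  # placeholder list; a real review would go much further

def pre_publish_checks(draft: str, human_reviewed: bool) -> list[str]:
    """Return the issues that must be resolved before a draft is published."""
    issues = []
    if not human_reviewed:
        issues.append("No human review recorded; a person must approve AI output before release.")
    if DISCLOSURE_NOTE.lower() not in draft.lower():
        issues.append("Missing AI disclosure note.")
    flagged = sorted(term for term in FLAGGED_TERMS if term in draft.lower())
    if flagged:
        issues.append("Contains claims needing sign-off: " + ", ".join(flagged))
    return issues

if __name__ == "__main__":
    draft = "Our new wellness program is guaranteed to work for everyone."
    for issue in pre_publish_checks(draft, human_reviewed=False):
        print("-", issue)
```

The point is not the code itself but the workflow it encodes: disclosure, a human gate before anything is published, and a routine check for risky claims.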

Companies Putting Ethics into Practice

Ethical leadership is not theoretical; it is operational. Various organizations are already showcasing what responsible AI usage looks like. These examples provide not just inspiration, but also models for communications professionals committed to clarity, accountability, and trust.

  • IBM has established an AI Ethics Board and clear rules concerning fairness and transparency. The company actively engages in public discussions about trustworthy AI and shares its principles for responsible use. (IBM Trust & Transparency)

  • Microsoft promotes responsible AI through company-wide standards, including dedicated roles, guiding principles, and oversight mechanisms. Their framework is designed to empower people, not replace them. (Microsoft AI Principles)

  • Salesforce has created an Office of Ethical and Humane Use and introduced public guidelines for ethical technology. Their focus on transparency and stakeholder inclusion sets a robust model for brands employing AI in customer and employee communications. (Salesforce Ethics Principles)

A Model for Responsible AI in Communications

One standout example of thoughtful leadership in this space comes from the Centre for Strategic Communication Excellence. Their guide offers a comprehensive ethics framework based on global standards and professional values. It’s a practical roadmap for communication professionals using AI with clarity and care. Read the full guide at the Centre for Strategic Communication Excellence.

Final Thoughts: Why Ethics Gives You an Edge

Communicators are more than just messengers. They must lead with values. Being ethical is not a burden; it’s a strength. Brands that understand this principle will cultivate trust and foster loyalty.

+++
Rosemary Sweig is the founder of CommsPro and CommsPro HQ, where she helps senior communications leaders manage the shift to AI. With decades of experience as both a corporate executive and trusted consultant, she guides organizations in using AI wisely, strategically, and without losing the human voice that builds trust.

Learn more at www.commspro.ca.



