Wednesday, January 14, 2026
Erin Beattie:
We aren’t going back.
AI is already woven into our work, systems, platforms, and daily routines. And while there is no stopping the momentum, there’s still room to shape what comes next.
The question isn’t whether AI will change our lives; it already has. The better question is: what do we want to protect as we move forward?
Across this series, I’ve written about the pressure to adopt AI tools quickly, the productivity promises, the marketing spin, and the fear of falling behind, but behind all that noise is something quieter and more human: the need for trust.
Trust in how tools are built. Trust in how decisions are made. Trust that people, not just outputs, still matter. That trust can’t be generated by code; it has to be earned.
We earn it by slowing down where it counts: by naming risks early, by being honest about what AI can’t do, and by refusing to sacrifice clarity, consent, or care in the name of speed.
In every field I’ve worked in (the public sector, higher education and post-secondary, health care, and technology), the same themes come up when systems fail: people feel erased, left out, and disconnected from the decision-making process.
AI has the potential to amplify that distance, or to close it, but only if we treat communication as a strategy rooted in values, not a nice-to-have. That’s why presence matters, not just process.
As Chad Flinn of Horizon Collective and TeachPod Consulting shared from the classroom: “AI can’t look me in the eyes and show me understanding or pain. It can’t sit with a student who stays after class to share that their partner has just been diagnosed with cancer, and it can’t hold the silence in that moment with me.”
We don’t protect equity just through policies; we do it through those quiet, irreplaceable moments of human connection.
Key takeaways from this series:
These are the anchors worth holding onto as AI continues to evolve.
There will always be new tools, models, and marketing, but that doesn’t mean we hand over our voice, our ethics, or our imagination.
It means staying engaged: speaking up when something doesn’t feel right, building policies that reflect consent and care, and making room for the people who ask better questions.
As Tim Carson, RSE, MA, trades educator, reminds us, the irreplaceable element is inspiration: “AI may help with the creative process, but it cannot replicate the tangible, yet unexplainable, ingredient of inspiration. I believe we are spiritual creatures, and as such, inspiration is that magic ingredient providing buoyancy to the act of being creative. Perhaps inspiration is the defining quality that AI just cannot reproduce.”
And as Flinn reflected from his teaching practice, the heart of it is connection: “That spark of connection, the thing that makes stories, art, teaching, and collaboration come alive, will always be human.”
If there is one thing I hope this series has made clear, it’s this: the future of AI is not inevitable. It isn’t happening to us; it’s happening with us. That choice, that agency, is the human side of AI.
The human side of AI isn’t just what we protect; it’s how we choose to show up, together.
+++
Erin Beattie, Founder and CCO, Engage and Empower Consulting
© 2026 Stratpair Ltd., trading as Strategic. Registered in Ireland: 747736