By Rebecca Gonzales, Strategic Advisor for Compliance and Ethics in R&D&I
It’s 2025 and AI is everywhere. Society is adopting generative AI faster than any technology we have seen in our lifetime. Its uses range from the mundane and expected, like writing email and boosting productivity, to the surprising, such as mental health support and dating.
The same is true in business. In McKinsey’s State of AI report released this year, 78% of enterprise organizations said they use AI in at least one business function, and 71% reported using generative AI specifically in at least one function. With this growth in adoption comes concern. Every year, AI incidents adversely affect the customers of companies deploying these systems: mortgages are denied, healthcare is erroneously withheld, and job applicants are thrown out of consideration because of their age. None of this was meant to happen in the race to become more efficient, drive innovation, and increase the bottom line. When an organization suffers an AI incident, it risks losing the trust of customers and employees, which can lead to adverse business outcomes and even legal action.
Over the past 20 years, researchers, academics, and ethics practitioners have called for closer examination of how AI is used and for better, safer ways to deploy the technology. Many companies have established AI principles, formed AI governance boards, and created leadership positions in an effort to design, develop, and deploy this technology without introducing more bias or harm into society. Yet these errors persist. The AI Incident Database, an online repository that aims to index the collective history of AI harms, has documented more than 3,000 incidents.
For responsible AI to succeed, all parts of an organization, not just the technologists, must participate in the AI lifecycle. We cannot place all the responsibility on the data scientists, AI/ML engineers, or product managers. It takes leaders at every level, both technical and non-technical, to provide the checks and balances that lead to responsible AI deployment.
But where can you start?
- Learn the basics of AI. Hundreds of free and low-cost courses cover the fundamentals.
- Talk to your team. If you don’t know how your company uses AI, find out who owns the AI use cases and book time with them to review those uses. A recent survey found that only 45% of employees consider their company’s AI adoption over the past year successful.
- Learn about common responsible AI principles and find out how your organization uses them. The Organisation for Economic Co-operation and Development (OECD) developed AI principles that many organizations use as the basis for their AI strategies. They are:
- Inclusive growth, sustainable development and well-being
- Human rights and democratic values, including fairness and privacy
- Transparency and explainability
- Robustness, security and safety
- Accountability
- Review how your organization aligns with these principles and help operationalize them. Take transparency and explainability: work with your technical counterparts to have them explain the models your organization uses and how those models derive their outputs. If the technical team can’t explain them to you in plain language, they are not meeting the basics of transparency. Use your communications and storytelling skills to help them find language everyone can understand.
In summary, communications professionals play a key role in advancing responsible AI because:
- We translate complexity into clarity – helping internal and external audiences understand how AI is used and why it matters.
- We shape narratives and build trust – ensuring transparency and accountability are embedded in how AI initiatives are communicated.
- We connect dots across teams – serving as bridges between technical teams, leadership, and the public, making sure the human impact of AI isn’t lost in the hype.
In this era of AI, it’s not just about what the technology can do, but what we choose to do with it. Let’s make sure that includes responsibility and impact.