AI ethics in the news: Spring 2024 roundup

The IoIC is committed to helping its members navigate the new AI landscape at work.

28 Mar 2024
by Cathryn Barnard

Our whitepaper, AI and the future of internal communication, outlined our pledge to help members not simply learn how to use new AI tools for themselves, but also better understand some of the ethical implications arising from this new era of synthetic media.

In the past year, AI has rarely been out of the news. On a daily basis, most of us will have seen a news article or opinion piece highlighting the myriad ways in which AI will transform life and work.

Generative AI in particular will reshape communication at work. On the one hand, AI has the potential to deliver ever more customised communication that meets the diverse and bespoke needs of its intended recipients; on the other, the way AI is used at work requires ongoing care, attention and discernment. It needs ethical oversight.

As part of their professional remit, internal communicators understand the foundational role communication plays in trust and relationship building, engagement, connection, community and wider wellbeing.

It’s crucial, therefore, that the integration of AI into workplace practices does not compromise authentic communication and the trust and goodwill that emanate from it.

Internal communicators have a key role to play in organisational debate about the safe and ethical adoption of AI at work. They should be involved in any taskforces convened to establish AI governance.

This, of course, requires ongoing appraisal of developments in the field of AI ethics. While the scope of ethics in AI is vast, we have been pleased to note several points of progress in this area in recent months.

The Paris Charter on AI in Journalism

In November 2023, a new international charter was announced. The Paris Charter on AI in Journalism was signed by 17 partner organisations representing journalists, newsrooms and media outlets from across the world. The charter sets out a number of foundational ethical principles to protect the integrity of news and information in the wake of AI advances.

The Charter aims to preserve the delivery of high-quality, authentic journalism that continues to serve society and uphold human rights.

Recognising the potential for AI to transform the global information landscape, those signing up to the Charter have prioritised human agency, transparency, accountability, traceability and more.

Given the obvious intersections between journalism and internal communication, this Charter has implications for our profession. We are now working on the first phase of a framework that will address similar priorities and safeguard the continued importance of the internal communication profession.

UK regulation of AI

Following a whitepaper consultation in 2023, the UK Government announced, in February 2024, a non-statutory, cross-sector framework for the regulation of AI.

While there are no immediate plans to codify this framework into law, the principles it sets out are nonetheless important. They set the scene for deeper ethical consideration by all organisations considering the integration of AI into ‘business-as-usual’ activities.

Five principles have been identified as follows:


- Safety, security and robustness: AI tools and systems should function in a secure and safe way so that all risks arising can be overseen and managed accordingly.

- Transparency and explainability: All AI systems should be appropriately transparent and explainable.

- Fairness: No AI tool or system should compromise the legal rights of individuals or organisations, discriminate against individuals or create unfair market outcomes.

- Accountability and governance: Governance measures should ensure effective oversight of the supply of AI tools and systems, with clear lines of accountability.

- Contestability and redress: Users and affected third parties in the AI lifecycle should be able to contest an AI outcome that is harmful or creates a risk of harm.

As organisations continue to work out the ethical parameters of AI adoption, these five principles are a useful guide for internal communicators considering the use of Generative AI in their workstreams.

EU regulation of AI

In addition, in March 2024, the European Parliament approved its first framework to regulate AI and limit societal risks associated with mainstream adoption.

It has centred its framework around risk to human rights and identified priority areas for safeguarding. These include critical infrastructure, education, healthcare, law enforcement, migration and electoral processes.

In parallel, its legislation requires producers of Generative AI tools to demonstrate greater levels of transparency regarding their source material and to respect EU copyright laws.

As various jurisdictions across the globe endeavour to legislate for the safe and ethical adoption of AI, it’s clear that consumer and citizen trust remains a key concern. In the digital age, we need to be able to trust the integrity of the tools we engage with. While regulation of AI remains a moving target, developments in this area serve as a helpful barometer for any internal communicator navigating the ethical considerations of adoption on behalf of their employer.

The impact of technology on worker quality of life

Finally, we were interested in the latest findings of the UK-based Institute for the Future of Work. Based on inputs from 5,000 UK workers, it has undertaken first-of-its-kind analysis of the impact of various technology types on the wellbeing of people at work.

Given the broad range of digital technologies increasingly in use at work today, its findings are unsurprisingly varied. Nonetheless, it has identified a distinct difference: exposure to mainstream smart devices was associated with improved wellbeing, while wearable devices and other technologies that may be construed as more intrusive were associated with anxiety.

As internal communicators, we recognise that how colleagues feel about their work is a key indicator of engagement and performance. Understanding the impact of emergent technologies on wellbeing is therefore critical to the work we do. We look forward to further findings from research in this area.


Truth has always been a key pillar of a healthy society. Making sure AI is adopted in a fashion that is equitable, inclusive, unbiased, representative and authentic is a primary challenge for any responsible citizen. We will continue to track key developments in the AI ethics arena. Stay tuned.
