Workplaces are increasingly digital, data-driven and metrics-oriented. The pace of change is breathtaking, and it’s an uphill struggle to make space for human connection, relationships, development and growth. Trust at work is on a knife edge.
As engagement wanes, organisational performance and effectiveness are at risk. Against this backdrop, the work of internal communication to explore the human implications of digital disruption has never been more important. Our quarterly technology updates help internal communication professionals stay informed and think more expansively about ways they can deliver strategic value.
In May, Henley Business School surveyed over 4,000 UK workers and surfaced an interesting paradox in workplace AI adoption.
While 63% of workers now use AI tools and 56% feel optimistic about their potential, 61% simultaneously feel overwhelmed by the breakneck pace of the technology’s advancement.
This research uncovers a workforce characterised by "cautious curiosity" rather than trepidation. Job displacement anxiety is less pronounced than perhaps expected, with 61% of workers unconcerned about AI replacing their roles. Instead, the primary frustrations centre on AI's propensity for errors (33%) and unreliable data (30%).
Current usage patterns show workers spend an average of 3.5 hours weekly with AI tools, primarily for research (35%), data analysis (33%), and content creation (32%). Looking ahead, 57% expect increased AI reliance within five years, with efficiency gains and productivity improvements driving this expectation.
However, significant barriers remain. Nearly half (49%) of organisations lack AI guidelines, and 60% of workers say they’d embrace AI more readily with proper training. The research suggests AI could potentially enable more flexible working arrangements, with 57% believing it could contribute to a four-day working week.
Ultimately, the timely survey reveals a compelling picture of UK workers who are simultaneously excited and uneasy about AI's workplace integration. The fact that nearly two-thirds already use AI tools whilst feeling overwhelmed provides an opportunity for internal communicators to help bridge this optimism-overwhelm gap.
By facilitating the development of clear, jargon-free AI policies, encouraging the creation of accessible training resources and holding space for open dialogue about AI's role in the workplace, internal communication can help colleagues graduate from cautious curiosity to more confident adoption – whilst listening to and addressing any legitimate concerns they hold.
Artificial intelligence is fundamentally transforming the workplace in myriad ways. It was astonishing to read that chief executives are increasingly deploying AI avatars to handle routine meetings, with the CEOs of companies like Klarna and Zoom sending their digital doubles to earnings calls and company updates.
These AI counterparts can communicate in multiple languages and are being trained to make decisions on behalf of the humans they represent.
The impact extends far beyond the C-suite. Major employers like BT and IBM are leveraging AI to reduce workforce requirements. Since ChatGPT's launch in November 2022, UK entry-level positions have plummeted by 32%, according to Adzuna research. Graduate roles, apprenticeships and internships now represent just 25% of the job market, down from 28.9% in 2022.
Outside of the knowledge sector, AI-powered robotics are revolutionising operations in warehouses and manufacturing. Amazon deploys over 750,000 robots across its fulfilment centres, with machines like Sparrow picking individual items and Proteus navigating alongside human workers. These systems are estimated to deliver annual savings of £7.3 billion by 2030.
The transformation isn't solely about job displacement, however. Companies adopting AI often create new job categories requiring technical skills, with AI-proficient workers potentially commanding 56% higher salaries. Amazon has established 700 new job categories and trained 300,000 workers since 2020, shifting employees from repetitive tasks to robot oversight and maintenance roles.
The workplace of tomorrow will likely feature human-AI collaboration rather than wholesale replacement. Whilst AI excels at routine, data-driven tasks, uniquely human skills – such as creativity, ethical reasoning and enlightened leadership – aren’t going anywhere soon. Success will depend on reskilling initiatives, supportive policies and ensuring automation's benefits are distributed equitably, rather than concentrated solely among high-skilled workers.
That said, while AI innovation might be transforming working lives, its benefits mustn't come at the expense of young or existing talent – both have a vital role in the ecosystem shaping the future of work. AI's rapid evolution demands that internal stakeholders remain focused, aligned and vigilant when it comes to both the opportunities and risks it brings with it.
This is most effectively achieved when organisations commit to ongoing, equitable and inclusive dialogue to evaluate both the advantages and disadvantages of these emerging technologies. And, of course, the neutral stance of internal communication presents a unique opportunity to connect leadership with colleagues and help close any knowledge divides.
In addition to changing how we work, AI is fundamentally reshaping how organisations communicate with external audiences. Companies are increasingly leveraging AI to create personalised advertising campaigns, generate content at scale and automate customer interactions.
WPP, for instance, now spends £300 million annually on AI capabilities, producing everything from motion-capture celebrity endorsements to algorithm-generated creative content. Meanwhile, tech giants like Meta are developing tools that allow businesses to create and target entire advertising campaigns automatically, potentially circumventing the need for traditional creative agencies.
The shift extends beyond marketing into information discovery itself. With 80% of consumers now relying on AI summaries for their searches, companies are increasingly attuned to the need for ‘generative engine optimisation’ (GEO) rather than traditional search engine optimisation (SEO). This means crafting content that AI systems can easily quote and reference, fundamentally changing how organisations present information to potential customers. As a result, a hybrid approach to website design is emerging, optimised for humans and machines alike, to ensure functionality and user-friendliness for both audiences across different platforms.
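To make that concrete, one common GEO tactic is publishing key facts as structured data that generative engines can lift and attribute cleanly. Below is a minimal Python sketch that builds schema.org FAQ markup; the organisation name, question and answer text are invented placeholders, and only the schema.org field names come from a real, public vocabulary.

```python
import json

# A minimal sketch of one GEO tactic: expressing key company facts as
# schema.org structured data (JSON-LD), so generative engines can quote
# and attribute them cleanly. All names and copy below are placeholders.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What does Example Co do?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Short, self-contained answers are easier for AI systems
                # to extract and cite accurately than long marketing copy.
                "text": "Example Co builds workplace communication software for UK employers.",
            },
        }
    ],
}

# The resulting JSON would sit inside a <script type="application/ld+json">
# tag in the page's HTML head.
print(json.dumps(faq_markup, indent=2))
```

The design point is less the markup itself than the writing style it encourages: short, self-contained, factual statements are far easier for an AI system to quote and reference than sprawling promotional prose.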
These technological advances in external communication aren’t without significant reputational risks, however. A survey of over 100 international public affairs leaders identified AI misuse as one of the greatest threats to brand reputation. Companies face intense scrutiny for creating deepfakes, spreading misinformation, making biased decisions or deploying unethical AI applications that manipulate public perception.
Authenticity concerns also plague AI-generated communications, as touched on in a previous roundup. Industry professionals – as well as target audiences – increasingly recognise AI content as feeling slightly ‘off’: overly glossy, idealised and hyper-real, potentially undermining genuine brand connections. Companies risk losing their distinctive voice and creative edge if they come to over-rely on automated content generation.
The opportunities, whilst significant, centre primarily on efficiency and reach. AI enables rapid content creation, sophisticated campaign testing and democratised access to advanced marketing tools for smaller businesses. Early adopters positioning themselves as authoritative sources for AI systems may secure long-term competitive advantages in an increasingly AI-dominated information landscape.
However, if not properly understood or managed, AI can also have a lasting and damaging trickle-down effect within organisations. Professional oversight of communication demonstrates care, whether for internal or external audiences. Currently, synthetic media cannot produce high-quality messaging without human refinement.
We already know the calibre of internal communication directly reflects organisational values and shapes internal culture. It establishes the organisation's voice and tone, significantly influencing colleague engagement and performance. But in today’s transforming tech landscape, that considered voice and tone increasingly needs to work hard outside of our organisations, too.
GenAI faces a growing accuracy crisis and mounting challenges from ‘hallucinations’: instances of fabricated information presented as fact by the large language models (LLMs) that power chatbots such as ChatGPT, Claude and Gemini. Paradoxically, it appears the more sophisticated AI systems become, the more prone they are to generating false content.
Recent ‘reasoning’ models from major players in the sector highlight alarming trends. OpenAI's o4-mini model hallucinated 48% of the time, whilst its o3 model recorded a 33% hallucination rate – double that of earlier versions. Google and DeepSeek's competing models exhibit similar problems, suggesting there may be a wider crisis across the industry.
In tandem, Apple has identified ‘fundamental limitations’ in cutting-edge AI systems. A recently published study found that large reasoning models suffer ‘complete accuracy collapse’ when tackling complex problems, with performance deteriorating as difficulty increases. This challenges assumptions about AI's trajectory towards artificial general intelligence.
Security vulnerabilities compound these issues. Research from Ben Gurion University in Israel reveals most AI chatbots can be easily ‘jailbroken’ to bypass safety controls, potentially providing dangerous information, including instructions for illegal activities. The threat from these compromised ‘dark LLMs’ is described by researchers as "immediate, tangible and deeply concerning."
Some experts suggest hallucinations may be inherent to current AI technology. As companies pour billions into scaling up AI infrastructure, fundamental questions emerge about whether larger models will deliver promised improvements or simply amplify existing problems.
Bringing the focus back to organisations, this serves to underline the importance of human oversight of GenAI in workplace communications, to ensure outputs reflect brand values and respect all colleagues. Modern workers expect transparency and authenticity, yet AI-generated content risks appearing inauthentic and may contain misinformation or copyright issues. Careful organisational supervision is required to maintain accuracy – and ultimately reputation.
Internal communicators can help bring greater awareness of GenAI's potential for inaccuracy. Simply put, its susceptibility to hallucinations means regular audits of AI tool usage and sustained human oversight of communication outputs will be mission-critical safeguards for the foreseeable future.
Is there a chance we've all drunk a little too much AI Kool-Aid in the past few years? Because, despite the hype and column inches, the widespread adoption and the billions invested in AI infrastructure, emerging research suggests the technology may in fact have a surprisingly limited impact on worker productivity and earnings.
A comprehensive study by economists from the University of Chicago and the University of Copenhagen analysed 25,000 workers across 7,000 workplaces. It found AI users saved merely 3% of their working time – roughly an hour in a typical working week. More disappointingly, only 3-7% of these modest productivity gains translated into higher wages.
Whilst AI excels at specific tasks – such as software development, copywriting and drafting documentation – broader occupational benefits prove minimal, according to the research.
The crucial question, then, is not how much time is saved but how workers use any marginal efficiency gain: over 80% of the study's participants dedicated their meagre AI-mediated time savings to additional work.
Real-world examples support the study's finding that AI has yet to prove a commercial ‘silver bullet’. Klarna's aggressive AI customer service replacement strategy contributed to losses of £77 million in Q1 2025, double the previous year's figure. Duolingo's CEO recently reversed plans to replace contract workers with AI, following a customer backlash. Meanwhile, IBM surveys indicate only a fraction of corporate AI initiatives deliver meaningful returns on investment.
These examples and the study's findings also challenge prevailing narratives about AI's transformative potential in the labour market. While adoption has been rapid, with firms now heavily invested in unlocking the technology's potential, the economic impact remains small, tempering concerns about the threat of imminent job displacement.
Contrary to fears about AI-driven unemployment, The Economist reports that white-collar employment has actually increased over the past year, implying other factors may be driving current job market concerns.
This all points to an interesting reality check on AI's workplace revolution.
Rather than eliminating jobs or dramatically boosting productivity, the technology – up until now at least – appears to offer incremental improvements, whilst failing to deliver the transformative economic benefits promised by industry advocates.
It also highlights the importance of transparent communication about AI's purpose and limitations, aligning perfectly with internal communication’s core strengths. It suggests there's tremendous potential for internal communicators to provide the clarity, information, support and strategic messaging that colleagues are clearly seeking.