AI and purpose-led business – setting the frame: from dystopia to deliberate choice
This is a narrative summary of an event on AI and purpose-led business in April 2026, hosted by Clarasys:
The event opened with a deliberate reframing. Rather than rehearsing familiar dystopian narratives about artificial intelligence—job loss, loss of control, the “end of humanity”—participants were invited to focus on a simpler but harder question: what do we actually want AI to be for?
As Sarah Gillard, CEO of Blueprint for Better Business, put it early in the evening, AI provokes an unusually wide emotional spectrum: “on the one hand, abundant utopia; on the other, the end of humanity.” Blueprint’s starting assumption, however, is that neither outcome is inevitable. The direction AI takes depends less on technical capability than on the choices made by leaders, institutions and systems.
Early audience contributions surfaced hopeful aspirations: AI as a democratising force, making powerful technology more accessible; AI accelerating solutions to major sustainability challenges (including energy transitions); AI enabling fraud prevention, better health, education and reductions in loneliness. Yet what became clear, even at this opening stage, was a sharp contrast between how people hope AI might be used and how it is currently discussed in many organisations.
“What we really want,” Sarah observed, “is technology to be deployed in service of our fundamental human needs.” Business conversations, by contrast, still concentrate overwhelmingly on efficiency, productivity and cost reduction. Those aims are legitimate, she acknowledged, but “they’re not necessarily the pinnacle of our human flourishing requirements.”
This tension—between narrow economic optimisation and broader human outcomes—ran like a thread through the rest of the discussion.
Blueprint’s lens: business, dignity and the common good
Blueprint for Better Business situates the AI debate within a wider critique of how business understands its role in society. Founded in the aftermath of the financial crisis, the charity emerged from the recognition that when business becomes detached from societal purpose, trust erodes and systemic harm follows.
Two deceptively simple ideas underpin Blueprint’s work. First, businesses are not machines for profit but communities of people creating value for society, with profit as a necessary fuel rather than the destination. Second, people must not be instrumentalised—employees, customers and future generations alike possess inherent dignity.
Applying this lens to AI exposes a worrying pattern. Much current AI deployment is framed in highly financialised terms: how to do the same things faster, cheaper and with fewer people. Blueprint’s concern is not that efficiency is wrong, but that it is radically insufficient as a guiding star for a technology with paradigm‑shifting potential.
As Sarah noted, AI often acts as a Trojan horse: a way for boards and executives to surface bigger questions about impact and responsibility in an otherwise polarised and politicised environment. The challenge is whether organisations are willing to let those questions genuinely shape decisions—or whether they default back to familiar metrics.
Leadership, technology and the “pro‑human” choice
Rob Garlick’s contribution grounded this abstract framing in both historical perspective and stark economics. Having spent three decades in technology and finance, he was unequivocal: “It is different this time.” AI, especially when combined with robotics, introduces an unprecedented dynamic—technology that can perform large classes of work “99% cheaper than humans, at a pace which is so much quicker than we can.”
He cited research suggesting that 31% of people in finance and 32% in technology believe they may not have a job within five years. Anxiety about wages, job availability and social stability is therefore not hypothetical; it is already present in the workforce.
Yet Rob was equally clear that none of this is predetermined. “That is around the choices that we decide to make. It is leadership choices.” The core problem, in his view, is the collision between two forces: extremely progressive technology and an economic system optimised almost exclusively for shareholder return.
His proposed response is deceptively modest: start by agreeing that we want AI outcomes to be “pro‑human.” From there, he argues, leaders can begin to align around outcomes such as better jobs, sufficient employment, safety nets and worker agency—even if they disagree on ideology. The haunting reference point he offered came from the Russell–Einstein Manifesto on nuclear weapons, which Einstein signed: “Remember your humanity, and forget the rest.”
The warning was implicit but unmistakable. Just as nuclear power could electrify cities or destroy them, AI’s direction depends on whether human considerations are embedded upstream, or bolted on too late.
Investors, incentives and the limits of agency
Annabel Gillard extended the analysis from leadership to systems. Drawing on her background in financial markets, she highlighted the structural tension between human agency and institutional constraint. Quoting biologist E.O. Wilson, she observed: “We have Paleolithic emotions, medieval institutions, and godlike technology.”
This mismatch explains much of the current discomfort. While AI offers a chance to rethink how work, value and systems are designed, decision‑makers remain embedded in incentive frameworks built for a different era. Financial markets, in particular, exert powerful pressure. As Charlie Munger famously said: “Show me the incentives and I’ll show you the outcome.”
The opportunity, Annabel argued, lies in recognising that AI enables—not just demands—system redesign. Because processes, workflows and organisational models are being rethought anyway, leaders in effect have a “blank sheet of paper.” Investors, therefore, face a choice: reinforce short‑term profit maximisation, or reward courageous, longer‑term bets that unlock more durable value.
While sober about the difficulty of this shift, she emphasised that binary narratives—killer robots versus utopia—are unhelpful. They imply a lack of agency at precisely the moment when agency matters most.
Responsible AI inside large institutions: NatWest
Paul Dongha’s reflections from NatWest brought the discussion from theory into operational reality. As Head of Responsible AI, he described a deliberately cautious approach. NatWest describes itself as “AI‑first,” but only within tightly defined boundaries. “We are naturally risk‑averse,” he said plainly. “We don’t use it to manipulate customers.”
NatWest deploys AI largely in back‑office operations, fraud investigation, complaint handling and software engineering. Transformation is incremental rather than wholesale. Central to this approach is a formal Ethical Impact Assessment process: every AI use case is required to pass through a rigorous set of questions covering not just bias and accuracy, but environmental sustainability, social impact and effects on colleagues.
Crucially, this assessment gives ethics real influence. Recommendations from the ethics panel feed directly into model validation, meaning teams cannot simply acknowledge concerns and proceed regardless; as Paul put it, the process “forces people to answer questions they hadn’t even considered.”
On workforce impact, he was candid. While rhetoric around augmentation is common, “there’s work to be done to make sure that actually happens.” NatWest is exploring how existing roles can be uplifted and what new roles might emerge in future. Still, he acknowledged that the broader economic system continues to reward cost‑cutting, making these conversations challenging even within a purpose‑led organisation.
Regulation, in his view, plays a constructive role. Though imperfect, frameworks like the EU AI Act establish important guardrails around privacy, fairness and misuse. To that end, his team is leading a bank‑wide programme to ensure NatWest is compliant with the EU AI Act. Without such guardrails, the risk of accidental or cumulative harm—especially when AI systems interact in unanticipated ways—rises sharply.
Reframing value: Nationwide’s experience
Graeme Burns from Nationwide offered some illustrations of how asking different questions can change outcomes. Nationwide made an early strategic decision that AI would not interact directly with customers; all 135 existing use cases were back‑office focused. This alone reflected a conscious tolerance for slower adoption in exchange for trust.
The more significant shift, however, came from reframing business cases. Graeme described supporting a contact‑centre AI project that combined call transcription and vulnerability detection. Initial proposals focused on short‑term productivity gains through reduced call times. In practice, that would have meant more calls per employee, which risked harming the experience of potentially vulnerable customers, who often needed more time, not less. Employees, for their part, wanted more time to support customers at the point of need, which they felt would reduce the likelihood of repeat calls from frustrated customers.
He described how stepping back from the contact‑centre business case and widening the scope to protect the customer and colleague experience surfaced benefits such as reduced complaints and increased recommendations and referrals. These benefits would create more value than the original time saving, yet had been overlooked because the use case was presented by a single part of the operation. His advice: businesses should actively consider, during use‑case development, what to do with the additional capacity AI creates, and should assess business cases in broader terms than traditional IT deployments.
Nationwide also shared examples of repurposing AI capabilities for social good, such as adapting internal booking technology to support Dementia UK by enabling easier appointment scheduling with specialist nurses. These stories, Graeme noted, became powerful internal signals that AI could align with purpose rather than undermine it. Employees were often wary of new AI initiatives, and he cautioned that organisations need to consider how internal narratives spread and can undermine future AI projects before they have begun.
Importantly, he did not present Nationwide’s journey as a series of unbroken successes. Some projects, particularly in software engineering, produced mixed results and hard lessons—publicly acknowledged rather than quietly buried. This openness itself was framed as part of building organisational trust.
Consultants, pace and the risk of asking the wrong questions faster
From a consulting perspective, Lindsay Cameron from Clarasys highlighted how AI is changing the tempo of organisational decision‑making. Where traditional transformation journeys suffered from long translation chains and friction between teams – for example, the conversion of a programme’s strategic intent into documented business requirements and then into developed code – AI increasingly allows ideas to be visualised and prototyped almost immediately.
This acceleration, however, raises a different risk: “If you ask the wrong questions, your design could end up at the wrong place very quickly.” For Lindsay, the value of frameworks like Blueprint’s lies less in providing answers than in slowing organisations down just enough to orient them toward better questions, and to build alignment and consensus on direction before speed takes over.
Across several of her clients, she has observed a consistent maturity curve. Initial reactions are often defensive, blocking the use of AI tools entirely; this gives way to either cautious experimentation or rapid acceleration, depending on leadership appetite, technological capability and budget. Only once people have used AI, or seen what it can do in practice, do they become receptive to deeper conversations about ethics, value and societal impact. Trying to start with those conversations too early, particularly at board level, often fails.
Trust, suppliers and the limits of control
A recurring concern in the audience discussion was trust—not just in how organisations deploy AI internally, but in the technologies they procure from powerful suppliers. Even highly trusted institutions risk reputational damage if customers perceive them as dependent on opaque or ethically questionable models.
Both NatWest and Nationwide acknowledged the dilemma. Opting out of dominant platforms entirely is rarely realistic. Instead, they emphasised supplier due diligence, ethical requirements embedded in procurement processes, and pushing—persistently—for better standards.
This was described as an exercise in collective pressure rather than individual purity. Asking the question repeatedly, even without immediate leverage, was portrayed as part of shaping future norms.
From efficiency to resilience and meaning
As the evening progressed, the conversation widened from organisational practice to societal implications. Several speakers argued that the dominance of efficiency as a decision‑making lens has already eroded engagement at work. AI risks intensifying this unless leaders actively pivot toward resilience, meaning and imagination.
Transformation, Annabel Gillard noted, already fails most of the time—70–80% by some estimates. AI, as the most profound transformation many organisations will ever face, cannot succeed using the same playbooks that produced disengagement and distrust.
Underlying many exchanges was a deeper question: what is work for, if machines can increasingly perform tasks faster and cheaper? Participants suggested that AI might force societies to confront this question rather than defer it. Whether this leads to flourishing—shorter working weeks, richer human roles, more creativity—or to backlash and unrest depends on whether choices are made deliberately, early enough.
As one speaker put it bluntly, systems rarely change because they are persuaded. They change when the cost of not changing becomes unbearable. The ambition of the Blueprint framework, and of conversations like this, is to ensure that intentional design precedes crisis.
Closing reflection
The event ended where it began: with a sense of agency. While few participants underestimated the power of existing incentives or the momentum of large technology providers, there was also a clear conviction that asking better questions matters.
AI, repeatedly, was described not as destiny but as a mirror—reflecting the values, courage and imagination of those deploying it. Whether it becomes another tool for narrowing value to numbers, or a catalyst for re‑humanising business, remains undecided. But as the discussion made clear, that decision is already being made, case by case, board by board, and system by system.
Our Framework on using AI can be seen here: A-framework-for-using-AI-booklet-draft-1.pdf