Webinar 3

2023 will be the year when purpose-driven companies need to grapple with the implications of their use of technology. I don’t mean cyber security or data compliance – both of which have well-established places on the risk register. I mean the way technology is used throughout the business and its human impact on employees, customers, other stakeholders and wider society. Technology is not neutral (1), and without clear values guiding its use, how can we ensure it is accountable and human-centred, does not discriminate, and does more good than harm?

Why now?  

Two major developments this year will bring the topic into sharp focus. Artificial Intelligence (AI) regulation is imminent, with the EU AI Act (2) due to be finalised this year. Its breadth of scope makes GDPR compliance look like a walk in the park. The UK and the US also have measures in development (3). Companies using AI will need to classify their use according to the regulatory risk categories and introduce mitigations where necessary. According to the latest McKinsey research (4), this means over half of all companies.   
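
To make that classification exercise concrete, here is a minimal sketch of how a company might triage its AI inventory against the draft Act’s four risk tiers (unacceptable, high, limited and minimal). The use cases and tier assignments below are illustrative assumptions, not legal analysis:

```python
# A minimal sketch of triaging an AI inventory against the draft EU AI Act's
# four risk tiers. Tier names follow the draft Act; the example use cases
# and their assignments are illustrative assumptions only.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical inventory: each AI use case mapped to an assumed tier.
AI_INVENTORY = {
    "social scoring of individuals": "unacceptable",  # prohibited practices
    "CV screening for recruitment": "high",           # high-risk obligations apply
    "customer service chatbot": "limited",            # transparency duties
    "spam filtering": "minimal",                      # no new obligations
}

def mitigation_needed(use_case: str) -> bool:
    """Flag use cases that must be stopped or need documented mitigations."""
    tier = AI_INVENTORY[use_case]
    assert tier in RISK_TIERS, f"unknown tier: {tier}"
    return tier in ("unacceptable", "high")

for case, tier in AI_INVENTORY.items():
    action = "review and mitigate" if mitigation_needed(case) else "monitor"
    print(f"{case}: {tier} risk -> {action}")
```

Even a toy triage like this makes the point: the first job is simply knowing where AI is used across the business.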

ChatGPT is the other trigger to act. LinkedIn and Twitter have been full of examples and speculation about this leap forward in generative AI in recent weeks. This tool, developed by OpenAI, uses natural language so you can ‘chat’ to ask a question, and in seconds it draws on the vast body of text it was trained on to produce a competent answer that sounds as though it was written by an informed human. It has also prompted discussion of ethical and legal questions, not least copyright (5).

Who is affected?  

Any use of AI brings potential exposure to this legislation, and in today’s digital economy that’s everyone, not just tech companies. Importantly for a purpose-driven organisation, the way you use these tools will have an ethical profile – whether by accident or design. To be clear, shaping that profile by design cannot be achieved simply by adding ‘ethics’ to the tech team’s agenda – effective oversight needs independence, ethics training and teeth.

Many companies have AI embedded across multiple functions. Below are three examples: 

HR – many companies use AI to recruit from a larger pool of potential candidates. However, algorithms ‘learn’ from data sets that may contain bias, as Amazon discovered at great expense (6) when its recruiting tool taught itself to exclude women. Facial recognition technology is also less reliable for those who are not white and male (7, 8, 9).
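
One way to catch this kind of drift early is a routine disparate-impact check on screening outcomes. The sketch below applies the widely used ‘four-fifths rule’ to selection rates; the group names and numbers are invented for illustration:

```python
# A minimal sketch of a disparate-impact check on AI-driven shortlisting,
# using the "four-fifths rule" often applied to selection rates.
# Group names and counts are invented for illustration.

outcomes = {
    "group_a": {"applicants": 400, "shortlisted": 120},
    "group_b": {"applicants": 380, "shortlisted": 60},
}

rates = {g: o["shortlisted"] / o["applicants"] for g, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```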

Pricing – in sectors such as leisure, tourism, transportation or retail, algorithms play a role in pricing, yet there is evidence of algorithmic collusion in surge pricing (10). For insurers and others with probability-based pricing, key concerns are non-discrimination, as well as the accuracy, reliability and representativeness of the data sets used to train the model.
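
A simple spot check can surface pricing disparities before regulators or customers do. The sketch below compares average quotes across groups with comparable risk profiles; the figures and the 5% tolerance threshold are assumptions for illustration, and a real review would control for legitimate rating factors:

```python
# A minimal sketch of a pricing-fairness spot check: comparing average
# quotes across groups with comparable risk profiles. Figures and the
# 5% tolerance are assumptions for illustration only.

from statistics import mean

quotes_gbp = {
    "group_a": [310, 295, 305, 300, 290],
    "group_b": [345, 360, 350, 340, 355],
}

baseline = min(mean(prices) for prices in quotes_gbp.values())

for group, prices in quotes_gbp.items():
    gap = (mean(prices) - baseline) / baseline
    flag = "investigate" if gap > 0.05 else "ok"
    print(f"{group}: average quote £{mean(prices):.0f}, gap {gap:+.1%} -> {flag}")
```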

Biometric data – what happens to photographs of staff and visitors taken for security and internal records – are they stored, destroyed or sold? If they are handled by a third party, do you know what it does with them? Data selling is big business, and faces are in particular demand. How would your visitors feel if coming to your office meant their faces were used to train machine-learning models?

Why does it matter?   

How these questions are answered has real-world implications for stakeholders – especially customers and staff. If high-quality talent is systematically screened out during recruitment, performance suffers, candidates suffer, and any DEI commitments are undermined. As well as fairness and non-discrimination, AI ethics frameworks focus on the need to be human-centred and respect the dignity of individuals – and the examples above illustrate how the everyday use of AI could fail such tests if not implemented thoughtfully.

There is much at stake. As well as the financial cost, there is a potential impact on employee well-being, customer loyalty, and talent recruitment and retention. AI has extraordinary potential to innovate, pioneer breakthroughs, improve customer experience and take the drudgery out of work. This blog does not advocate Luddite-like resistance – or should that be Canute-like? AI is an incredibly powerful tool that should be welcomed, but implemented with care precisely because of its power and the extent of its impact.

What can I do?  

Huge strides have been made in recent years to cohere around key principles and operationalise them (11). A growing body of useful tools and resources includes input from the Vatican. As public awareness grows, with incoming regulation and high-profile developments like ChatGPT, companies will be expected to manage these risks and to treat their stakeholders with the same values when using technology as they do without it. This won’t happen by accident, and companies that wish to show integrity will need to be intentional about ensuring that their corporate values are reflected in the way they design and deploy technology.

Ethical failure more often results from blind spots than from bad intent. Best practice in AI is ‘ethics by design’ – incorporating diverse perspectives, independent review and clear priorities at the outset. Trade-offs are likely, but approaching them intentionally enables companies to prioritise in a way that reflects their values and purpose, thereby showing integrity and offering a robust defence in the event of unforeseen difficulties.

Executive teams should ensure that responsibility for AI ethics is clearly defined, resourced and structured in a way that enables genuine challenge, and that their AI strategy is aligned with corporate values and purpose. Boards can play their part by ensuring they understand where the business is carrying AI ethical risk, holding the executive team to account, and considering the adoption of one of the public frameworks, such as the OECD principles (12) or IEEE standards (13).

Those who get this wrong may pay a heavy price, not only in regulatory fines but also in reputational damage. Yet with fewer than 20% of AI-using companies considering these issues, there is an opportunity to show leadership, build digital trust and benefit from greater resilience and scalability.