As I See It - It AIn’t Over-Regulated Till AI Say So

By Professor Michael Mainelli
Published by London Business Matters (September/October 2025), London Chamber of Commerce & Industry, page 10.


To paraphrase Robert Solow’s 1987 paradox about computers, "you see AI everywhere except in the productivity statistics". Solow was highlighting the discrepancy between the widespread adoption of information technology (IT) and its perceived lack of impact on productivity growth.

Widespread generative AI is not yet three years old, so it may seem a bit unfair to shout ‘bubble!’ this early. However, bubbles cause troubles, and AI might inflate into a bubble comparable to the Dot.Com era of 2000. In 2025 the broad tech sector accounts for 34% of the S&P 500's market capitalisation, exceeding the previous record of 33% set in March 2000. On the other hand, forward earnings multiples, though rising, remain lower, at around 30 compared with around 50 during the Dot.Com era.

According to Bill Janeway, founder of Warburg Pincus’ High Tech Investment team, ‘productive bubbles’, where money is thrown at ideas with little caution, are essential to realising huge, market-transforming disruption. These productive bubbles emerge from irrational decisions, whether irrational investments or, sadly, war.

Equally, bubbles lead to calls for regulation, and AI is no different. The EU and many US states are well down the path to regulation. They define a wide variety of techniques, from expert systems to large language models to machine learning, as AI, and even throw in ‘logic programming’, i.e. all of computing. Here in the UK, where services make up nearly 60% of total exports, cross-border regulation of all of computing should terrify us. Yet we need to respond to society’s desire to have AI regulated. We need to secure a stable, international approach to regulating AI or we risk losing huge markets.

Fortunately, we have good examples of interoperable international regulation to call upon: community-enforced standards markets via the International Organization for Standardization (ISO). ISO standards, enforced through accreditation and certification, are at the heart of food safety, aviation safety, and maritime safety. Why not AI safety?

Lucky guess? Yes, AI experts saw regulation coming and began developing ISO/IEC 42001:2023, the artificial intelligence management system standard, some years ago for just this purpose. ISO 42001 is an international standard for the responsible management of AI, specifying requirements for establishing, implementing, maintaining, and continually improving AI management within organisations, including managing AI risks such as bias, data security, privacy, intellectual property, energy use, and lack of accountability. Firms can be certified as compliant with the standard and use those certifications in tenders or supply-chain regulation.

We in the UK should welcome an open standards approach to regulation that shares its structure with the familiar ISO 9000, 14000, and 27000 families, among others, providing us with an international passport to deliver cross-border computing services with embedded AI.

The UK-inspired AIQI Consortium aims to promote open dialogue and collaboration, supporting global quality-infrastructure bodies such as BSI and UKAS in ensuring the safe, secure, and ethical development of AI technologies. Find out more at www.aiqi.org.