
Author

Joe Zammit-Lucia

Joe Zammit-Lucia is a RADIX Co-Founder and board member. He is an entrepreneur and commentator on business and political issues, writing in outlets in the UK, US, Germany and the Netherlands. His particular interest is the relationship between business and politics.

What on earth to do about AI


In 2018, Google withdrew from the Maven contract with the US Department of Defense after a staff outcry over the company's agreement to let its AI technology be used for military purposes. That same year, Microsoft CEO Satya Nadella declared that he found the Administration's child separation immigration policy 'abhorrent' and reassured employees that Microsoft technology was not being used for those purposes.

Now it is the turn of Anthropic and its CEO Dario Amodei to get into a public spat with the Department of War over the purposes for which the company's technology may be used. Amodei wanted two exclusions – mass surveillance of American citizens and use in fully autonomous weapons. The Department was not willing to agree.

Since then, OpenAI has reportedly signed its own agreement with the Department of War, claiming that it contains the same protections Anthropic was seeking. Some have disputed that claim. None of us knows the contractual details. Yet there seems to be no shortage of people willing to shoot in the dark.

I will not get into my own views on the rights and wrongs of insisting on such exclusions, nor on the rights and wrongs of the Department's resistance, since my views don't matter a jot. Many will have different opinions on all of it – views that will be expressed with strong conviction, all sides believing that their own position is self-evidently correct and morally unchallengeable. Yawn.

Instead, I would like to address two broader aspects here: how we regulate and manage the use of increasingly powerful AI tools, and the process for conducting such negotiations.

AI Regulation

Here we are still all at sea.

According to Anthropic’s Claude:

“There are several broad regulatory approaches taking shape globally, though countries are at very different stages and have quite different philosophies.

Risk-based frameworks are perhaps the most influential model, exemplified by the EU AI Act. The idea is to categorize AI systems by the potential harm they could cause and apply stricter rules to higher-risk uses — like medical diagnosis, criminal justice, or critical infrastructure — while leaving low-risk applications largely unregulated.

Sector-specific regulation is the approach favoured in the US and UK, where existing agencies (FDA, FTC, financial regulators) extend their mandates to cover AI in their domains rather than creating a single overarching AI law. The advantage is that regulators with domain expertise handle AI in context; the risk is gaps and inconsistency.

Transparency and disclosure requirements are emerging almost universally — rules requiring that AI-generated content be labelled, that automated decisions be explainable to affected individuals, and that companies disclose when AI is being used in consequential contexts like hiring or lending.

Frontier model governance has become a major focus since 2023, with debates around mandatory safety evaluations before deploying powerful models, compute thresholds that trigger regulatory scrutiny, and requirements to share safety findings with governments. The UK's AI Safety Institute and similar bodies in the US and EU represent this approach institutionally.

Liability and accountability rules are being developed to clarify who is responsible when AI causes harm — the developer, the deployer, or both — which shapes incentives significantly.

International coordination is nascent but active. The G7 Hiroshima AI Process, the Bletchley Declaration, and efforts at the OECD and UN reflect attempts to align standards across borders, though binding international agreements remain elusive.”

The impact and potential impact of AI tools remain largely unknown. In such an environment, what we need is exploration and trial and error. It is therefore desirable to have different countries and blocs explore different approaches to regulation. Given our current state of ignorance, rushing to single global standards would be undesirable even if it were possible.

It is to be expected that there will be many who are convinced that their approach is the right approach and will push for its widespread adoption. They should be given short shrift.

What is important is to watch carefully the impact of the different regulatory approaches and learn from what seems to work and what does not. Given the pace of technological advance, it will be a difficult and tortuous journey, with, as always, different groups pulling in different directions. Things will doubtless go right in some ways and wrong in others. But, for the moment, letting a thousand flowers bloom seems the most reasonable approach.

Conducting sensitive negotiations

Let me start by saying that, in my opinion, companies have every right to have a view, sometimes a strong view, as to how and when their products and services get used. And Mr Amodei certainly seems to have strong views. Anthropic has also, helpfully, been at the forefront of expressing its views on appropriate regulatory frameworks.

Of course, things get complicated when we're talking about matters of national security. Such discussions are always sensitive. Which brings me to what is perhaps my main point about this spat – was it wise for Anthropic and Mr Amodei to take these negotiations public and conduct them on the front pages of newspapers?

Taking such sensitive negotiations public narrows the scope for finding reasonable solutions. Once pushed into the searing heat of media and public debate, each side tends to harden its position and does not want to be seen to 'lose'. Further, as I have commented before, engaging publicly in highly emotive and sensitive political issues is not often the prime skill set of business leaders.

As expected, taking this debate public has led to escalation. In retaliation, the Administration has threatened to blacklist Anthropic from all US government contracts. The company has vowed to challenge any such action in court. And on it will go.

Of course, we all know that, in the right circumstances, all governments can turn to the nuclear option – commandeering any company's intellectual property. This would be a big step for the US to take given its staunch defence of IP rights protection over many decades. But the option exists.

Where to?

It seems increasingly clear that AI tools will continue to develop – fast and maybe unpredictably. The challenge lies in the tension between AI as an economic and societal opportunity and the (largely unknown and unpredictable) risks that accompany any new technology that has a meaningful impact. The challenges are heightened in a world of eroding collaboration and increasing economic and geopolitical competition.

In such an environment, letting a thousand regulatory flowers bloom is likely the best option.

As for companies involved in this and other challenging areas, their sector and subject matter expertise will be crucial to the regulatory debate. However, while it has become fashionable for some business leaders to enter public debate about sensitive and difficult political issues, it is debatable whether that is helpful. The debate is difficult enough. Why pour fuel on to the fire?


This blog was originally posted on Joe's newsletter on LinkedIn.
