What on Earth just happened?!

A whistle-stop tour of the first two months of 2025

tl;dr Hype and noise up, new models aplenty, safety and regulation (mostly) down, the UK Government has been busy, a new Charity AI Taskforce has launched, and we must take everyone with us if we are to get adoption right and seize the opportunity.

Every half-term I try to take a bit of a break from client-facing work and summarise what’s been going on in the world of AI. I can’t possibly do justice to everything, and there will be loads that I’ve missed, but here’s some of what caught my eye.

UK Government

  • The Government released its AI Opportunities Action Plan. It’s a bold and ambitious 50-point plan, which I feel reflects the scale of urgency needed. However, with its heavy focus on deploying AI technologies in support of public services (again, much needed in many areas), there’s a lack of recognition of the need to take the public on the journey. I wrote about this here.

  • As part of the push on AI in public services, the Government announced a partnership with Anthropic. An interesting one, this, given that (I think) all Government departments are either Microsoft- or Google-native. I find myself increasingly values-aligned with Anthropic, and if you’re ever looking for a good way to spend 5 hours of your life, this Lex Fridman interview with Dario Amodei and Amanda Askell is the most fascinating listen.

  • The Government also released an AI ‘playbook’ for civil servants and public sector workers. At 118 pages, setting out 10 principles, it’s a pretty comprehensive run-through of all the considerations a public servant needs to make as they consider using AI technologies. However, as with many industry AI policies that I’ve seen, it’s trying to be everything to everyone, and doesn’t particularly help the person in the organisation who just wants to know whether they can use [insert favourite Gen AI tool] to help them generate some ideas.

  • The two slightly more surprising things were the renaming of the AI Safety Institute as the AI Security Institute, and the decision not to sign the declaration at the Paris AI summit (more on that below). On the former, I had long assumed a move closer to the national security infrastructure of Government would be on the cards for the AISI, though the loss of the safety focus concerns me.

  • More positively, the Government announced plans to legislate on pornographic deepfake images as part of the upcoming Crime and Policing Bill. There’s a lot more to be done around the harms related to this, but it’s certainly a move in the right direction.

  • The Department for Education also released further guidance for schools, along with product safety guidance for EdTech providers. I gave my reflections on this here, alongside my thoughts on…

US Government

  • I’m still trying to get to grips with everything on the other side of the pond, but we certainly know that Trump has:

    • Torn up Biden’s Executive Order on AI Safety (which was mainly based on voluntary commitments already made by the big AI companies anyway)

    • Launched the $500Bn Stargate initiative (mainly private sector investment in the infrastructure required for further AI innovation and development)

    • Given the owner of one of the leading AI models access to vast amounts of sensitive citizen data

Global Governance

  • The French hosted the latest round of global AI governance talks. While the first of these summits had focused almost entirely on ‘existential’ AI risk, the communiqué from the Paris talks focused on making sure AI is "transparent", "safe", "secure and trustworthy", and on "making AI sustainable for people and the planet".

  • The US and the UK both declined to sign the agreement. According to the BBC, the UK Government said it "agreed with much of the leaders' declaration" but felt it was lacking in some parts: "We felt the declaration didn't provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it."

  • Dario Amodei shared his views on the summit (an opportunity missed), which I find myself agreeing with!

AI Models

  • After the flurry of announcements from many of the ‘frontier’ model companies in the buildup to Christmas, the Chinese DeepSeek model burst onto the scene from nowhere, demonstrating comparable levels of performance for what was, apparently, a minuscule investment in dollars and compute. This immediately made a massive dent in US tech stocks, with Nvidia losing $600bn of its market value.

  • OpenAI reacted by making its o3-mini reasoning model available to Plus users, and also announced its ‘Deep Research’ model (matching its Google counterpart of the same name), which has been made available to Pro ($200/month) users. If you want to try a ‘Deep Research’ model out for yourself, head to https://www.perplexity.ai/ and use the dropdown in the prompt box.

  • Google Workspace organisations got a late Christmas present, with Gemini suddenly included in Workspace packages, offering enterprise-level protection and the ability to work with your own documents.

AI and Work / AI and Civil Society

  • At a recent event hosted by the National Lottery Community Fund, a new Charity AI Taskforce was announced, which we’re delighted to confirm AIConfident is a member of. Lots more to come on this…

  • Friends of the Earth put out a paper titled ‘Harnessing AI for Environmental Justice’ which, as well as exploring the subject from a number of angles, sets out principles that will be helpful to organisations looking to balance their AI use with their environmental considerations. We have also seen the publication of an AI Energy Leaderboard, though this is probably of more use to developers than to individual users.

  • Anthropic published a paper on the use of Claude in organisations. I’ve not got into the detail yet, so here’s Ethan Mollick’s take!
