April: What AI audits, frigates and the Bank of England have in common
- emma56918
- 2 days ago
- 6 min read

For those of us who are UK based, we’re finally celebrating being able to sit outside in just a jumper. It’s been a long winter, although we suspect we say that every year.
One of us, Emma, actually had the opportunity to double down and spend Easter in Tallinn. She swam in the Baltic, and yes, at 0.8 degrees it was literally baltic. Lauren made the smarter choice, spending the recent long weekend eating pasta in sunny Rome.
But enough about our trips abroad. We’ve also been working hard. We spent most of March building and refining how we actually deliver data visibility to clients, turning what is usually a months-long mapping exercise into something that can be stood up in weeks and looks pretty. It’s been a lot of time in the weeds on data structures, ownership, and workflows, but the outcome is simple: people finally being able to see what they have, who touches it, and where the risk actually sits.
This month we’re looking at (shock) why AI requires actual governance, why digital transformation is hard, and why authors should be competing with AI. Read on.
Not so shy AI
Shy Girl, a horror novel about a woman forced by a kidnapper to impersonate a dog, was released in Britain and due out in America when vigilante readers detected signs of AI involvement. The book was withdrawn from publication. The Economist uses this as a jumping-off point to examine what AI prose actually looks like and whether it matters. LLM-generated creative writing has recognisable tells, such as excessive and repetitive metaphors. But as the piece pushes back, it’s easy to romanticise human creation when so much of it is terrible too.
Pit an LLM against Jane Austen and there’s no contest. But pit it against a 99p Kindle find and it looks much more competitive. Authors and publishers are exploring certification schemes that guarantee a title was written by a human, but once readers have to pay for that guarantee, it may prove a less easy solution than many expect. The moral of the Shy Girl saga isn’t that AI writing is bad or should be banned. It’s that it has to be beaten.
We like this piece because it’s an angle we hadn’t considered, although perhaps we should have. The point is correct: soon, the human touch will be the ultimate luxury.
Banking Blind
The FT writes that Starling Bank’s CIO, Harriet Rees, has proposed to the Department for Science, Innovation and Technology that the UK government conduct standardised, independent testing of the general-purpose AI models used across UK banking. Right now every bank is doing its own evaluation (or not), and there’s no law requiring AI models to be tested before use in regulated industries.
This is surprisingly sensible from Rees, but we are slightly sceptical about increased regulation as the solution to everything. However, the part we really struggle with is that this is a governance gap dressed up as a testing problem.
The proposal focuses on whether the models are OK, when the inputs are where the real risk sits. Every bank is feeding different data, with different quality, different biases and different consent frameworks, into the same handful of US-built LLMs. The UK government is being asked to assure models it didn’t build and can’t control. Testing the models centrally tells you nothing about whether Barclays’ lending data or HSBC’s fraud-detection inputs are fit for purpose. It’s the classic mistake of governing the technology rather than governing the data.
AI gets boring (on purpose)
AI is taking over auditing, with EY rolling out an AI-powered audit platform this month that speeds up risk assessment and pre-fills forms (side note: EY’s annual tech budget is $1bn!) and KPMG piloting agentic AI. On the regulatory side, the UK’s Financial Reporting Council has just published what it says is the first guidance anywhere for audit firms on the use of generative and agentic AI. The FRC’s line is that firms need processes to check and mitigate each risk, and, crucially, that accountability stays with the humans: you can’t blame the technology.
We don’t know much about auditing, other than that it sounds boring but important. From our own experience, while AI is a fantastic tool, the accountability chain is only as strong as your ability to trace what went wrong, and agentic AI (where one AI coordinates other AIs) makes that audit trail significantly harder to reconstruct. Good luck to the FRC on that.
Our bet is that the companies that get AI in audit right won’t be the ones with the fanciest orchestration layer; they’ll be the ones that fixed the data quality, built the audit trails, and solved the boring problems first. Sound familiar?
Model Behaviour
Scale AI, 49% owned by Meta, pays tens of thousands of gig workers via its Outlier platform to train AI, but it turns out the dream isn’t quite there. The pitch is “become the expert that AI learns from”: flexible work for people with strong credentials in fields like medicine, physics, and economics. But the Guardian spoke to 10 current and former workers who describe a very different reality, including scraping personal data from Instagram and Facebook accounts and tagging people by name, location and friends, under-18s among them. They’ve also been asked to transcribe pornographic audio, label dead animals, and harvest copyrighted artwork. Many describe bait-and-switch pay, constant surveillance via screenshot software, and the feeling of training their own replacements.
This is the sausage of AI, and it isn’t sexy. If your organisation is deploying these models (and no shade, because we do too), this is what sits underneath them. The next time a vendor tells you their model is enterprise-ready, ask them where the training data came from. And if they can't answer, that's your governance problem now.
A pleasant surprise
MPs were surprised to find themselves reviewing a successful government IT project, the Bank of England’s £431m replacement of its core payments system. During the hearing, MPs learned that clarity from the off, an environment of openness, close integration with suppliers, and a one team culture were, among other things, key to the project’s success.
The lessons aren’t particularly glamorous. The Bank knew what it wanted from the start. It defined thousands of requirements upfront. It took its time on procurement. It treated suppliers as part of a single team. And it built a culture where people could actually raise problems early.
In other words, it did all the boring things properly. The uncomfortable takeaway is that this wasn’t a technology win. It was a planning, governance, and culture win. And a very nice counterbalance to the usual news contained in The Fri-Up.
A failed frigate
This is the story of Germany’s F126 frigate, which was supposed to be the country’s largest warship since WWII and the biggest-ever contract for Dutch shipbuilder Damen, which won the €5.3bn tender in 2020 to build four of them. It has since become one of Germany’s biggest defence procurement disasters. Why? Damen made the fateful early decision to switch to new design software, and it went badly wrong.
The software failure is only half the story. German procurement bureaucracy was equally dysfunctional: 7,000+ specifications (down to which door handle and light switch to use), insistence on paper documents, routine rejection of English-language submissions, a 20-day internal approval deadline that was "never" met, and a complex hierarchy that sent decisions up and down endlessly.
Damen is now being stripped of lead-contractor status. There is an obvious cautionary tale here for European defence cooperation. But as a team that has led a migration ourselves, we know digital transformation is really hard, so we’re very sympathetic. Our two cents: software migrations suck, so don’t attempt one while you’re tackling another huge project, and when you do, invest in as much change management as you can afford.
Watercooler Chat
A selection of the things we like that keep us sane while running a small business…
What Broke the Beckhams? A long read from New York Magazine on the Beckham family drama. Fun fact, Brooklyn has hired Harvey Weinstein's former spokesman…
Books don’t grow on trees: A substack devoted to the law and all its quirks by Will Bowes.
Vesta in Tallinn: Some of the most fantastic food we’ve had in ages. Book in advance.
Em the Nutritionist's easy pasta sauce recipes: Made one last night, delish. And yes, we are hungry at the moment.

