January: Time flies when you’re governing data
- Emma Dunn

- Apr 7
- 10 min read

When we were little we used to wonder why our parents were always going on about the new year sneaking up on them. Now we understand. Somehow we have one foot in 2024 and another in 2026. But we're glad to be back and in the office. Is it weird to dream about having two screens? Asking for a friend.
Lots happened last year, and even more is on the horizon for Team Friday in 2026. We turn two and we’re hitting our toddler years in style.
We are continuing to work with some amazing businesses in Australia, Singapore, the UK and the EU. And the good news is we're finally feeling bold enough to open our books again, so if you have a data problem that was driving you bananas in 2025, get in touch!
Emma is going to be in Singapore and Australia this February, so if you want to catch up for a coffee and talk all things data, reach out. Lauren, Max and Geordie are going to be holding the fort here in the UK. So if you haven’t inspected our office yet, drop by for a chai (Lauren’s been banned from coffee).
Now, not sure if you follow us on LinkedIn - if not, you should! But we contributed to the recently released UK Data Standards for Smart Data Report, which, spoiler, looks at how existing data standards can support future Smart Data schemes. What is Smart Data? Great question. Smart Data is a framework that lets people and businesses securely access and share their own data, with their consent (*much to Lauren's dismay*), to get better services and outcomes. Basically, it's Open Banking applied to other sectors like energy, telecoms, pensions and retail. Think Australia's Consumer Data Right meets the Health Data Sharing Act, meets the EU's Data Act and Data Governance Act, meets Singapore's draft Data Portability Framework. The goal is the same everywhere: make data portable, standardised and usable, so it can power switching, comparison and innovation. The approach, well, that is where it differs. All the good things. Read our blog about it here!
It’s a packed newsletter this week. From AI fails, to lost lobster and EU digital enforcement, we’ve got it all. Read on!
Claudius manipulated
Emma's still laughing about this. The WSJ let Claude run its newsroom vending machine: ordering stock, setting prices, negotiating with staff in Slack. Shock horror, but it went off the rails very quickly. Poor Claudius got talked into giving most items away for free, ordered a live fish, and approved a PlayStation 5 as marketing.
The setup exposed the data reality of most AI agents. The system was highly vulnerable to persuasive inputs and fabricated “documents” that hijacked its decision-making. Even with an upgraded model and a separate “CEO” bot for oversight, humans successfully staged a boardroom coup and zeroed out prices again. A perfect stress test for what happens when you give an agent autonomy, a budget, and a workplace chat full of mischievous colleagues.
It's hilarious, and we also love the idea of an office fish, but what it really shows is that AI agents are only as good as their operational data, guardrails, and governance. The lesson isn't that agents are useless (it did successfully order a fish and have it delivered...), it's that the next wave of AI will be won by organisations that treat agents like technology, not people: give them clean inputs, tight permissions, audit trails… and don't let them near the company credit card without controls and oversight.
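What "tight permissions and audit trails" can mean in practice: gate every action an agent requests through an allow-list and a budget check, and log the decision either way. This is a minimal illustrative sketch, not anything from the WSJ experiment - the action names, budget figure, and `execute` helper are all made up.

```python
import json
import time

# Hypothetical guardrail layer for an agent that runs a vending machine.
# The agent proposes actions; this code decides whether they actually run.
ALLOWED_ACTIONS = {"order_stock", "set_price"}  # no refunds, no marketing buys
MAX_SPEND_PER_ORDER = 50.00                     # hard budget cap per action

audit_log = []  # every request is recorded, approved or not

def execute(action: str, payload: dict) -> str:
    """Run an agent-requested action only if it passes the guardrails."""
    entry = {"ts": time.time(), "action": action, "payload": payload}
    if action not in ALLOWED_ACTIONS:
        entry["result"] = "blocked: action not permitted"
    elif payload.get("cost", 0) > MAX_SPEND_PER_ORDER:
        entry["result"] = "blocked: over budget, needs human sign-off"
    else:
        entry["result"] = "approved"
    audit_log.append(entry)
    return entry["result"]

print(execute("order_stock", {"item": "crisps", "cost": 12.50}))
print(execute("order_stock", {"item": "PlayStation 5", "cost": 499.00}))
print(execute("buy_fish", {"item": "live fish", "cost": 8.00}))
print(json.dumps(audit_log[-1], indent=2))  # the trail survives persuasion
```

The point of the design: the model can be talked into anything, but the allow-list and budget cap can't, and the audit log tells you exactly what it tried.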
Lost Lobster
A $400,000 truckload of lobster bound for Costco was stolen via a very 2025-style supply chain scam. Criminals used phishing and impersonation to pose as a legitimate trucking firm, collected the load from a cold-storage facility, then disabled the GPS trackers and disappeared.
The logistics company coordinating the shipment says the fraud hinged on tiny weak points. A near-identical email domain, convincing fake IDs, and enough operational knowledge to match real trailer numbers and branding. And once the lobsters vanished, tracing them became almost impossible because food cargo doesn’t carry unique serial numbers, making it easy to funnel back into the supply chain - presumably now in someone’s black market seafood boil! The FBI is reportedly looking into it, and industry groups say this kind of strategic cargo theft is rising, with everything from copper to baby formula being targeted.
This isn't just a "lobster heist" story. On the surface, it's a classic cyber-meets-physical fraud story, but it's also a human one, and a reminder that even sophisticated data governance controls can't fix everything (rich coming from us, I know!). The weak points here weren't caused by a lack of technology, but by (very understandable) human judgment in a busy warehouse environment and the fact that food cargo isn't uniquely identifiable. Cameras, GPS, better vendor verification tools? Sure, they might help. But even then, you're still betting on layers of control that might be turned off, ignored, or outsmarted.
To us, this is a great case study in where proportionate governance matters. Not everything needs a blockchain. Sometimes the smartest, most strategic choice is accepting the risk, pricing it in, or insuring against it. That's not being lax - it's being commercially intelligent. Don't get us wrong: we could design verified counterparty frameworks, tamper-resistant systems, and shared trust signals across logistics networks for clients like Costco, and we would probably make a lot of money doing it. But that isn't how we like to do things at Friday. Our (somewhat controversial) view is that sometimes the better investment is risk and cost modelling, and sometimes just a well-negotiated insurance policy. That's why we built a model that right-sizes governance controls to quantifiable risks within a client's actual organisational environment. And that's how you go from stolen lobsters to data governance. Impressed?
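"Pricing the risk in" is just arithmetic. Here's a back-of-envelope sketch of the comparison we mean - expected annual loss versus the cost of building a control versus an insurance premium. All of the numbers below are invented for illustration; they're not real figures for this heist or anyone's actual risk model.

```python
# Illustrative right-sizing of a control against a quantified risk.
# Every number here is a made-up assumption, not real data.
cargo_value = 400_000          # value at risk per shipment
theft_probability = 0.001      # assumed chance of theft per shipment
shipments_per_year = 500

expected_annual_loss = cargo_value * theft_probability * shipments_per_year

control_cost = 250_000         # e.g. serialised tracking + counterparty verification
insurance_premium = 150_000    # assumed annual premium for the same exposure

options = {
    "accept the risk": expected_annual_loss,
    "build the control": control_cost,
    "insure it": insurance_premium,
}
best = min(options, key=options.get)

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Cheapest option: {best}")
```

With these (fictional) inputs the insurance policy wins, the control loses, and "not everything needs a blockchain" falls out of the spreadsheet rather than a slogan.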
Brussels means business
After many years focusing on drafting and dictating digital rules, in 2026 the EU is shifting to enforcing them. According to the FT, Brussels is ramping up action under the Digital Markets Act and Digital Services Act, with ongoing and new probes into how Big Tech controls access, competition and data flows. Unsurprisingly, the political backdrop is volatile, with the US threatening retaliation.
Ignoring Trumpian tantrums, this actually shouldn’t be viewed as just Big Tech vs Brussels. It’s a governance shift that will land inside organisations. Expect tighter expectations around data access, transparency, model training provenance, auditability, and platform dependency risk. In short: “trust” becomes operational. And the companies that will thrive won’t be the ones with the flashiest AI - it will be the ones with clean data lineage, clear consent and rights management, defensible AI training practices, and systems that hold up under scrutiny.
And maybe, just maybe, that’s not a bad thing. If it means EU businesses stop viewing data governance as another GDPR checkbox and start seeing it as a way to future-proof business models, the whole conversation shifts. This is about strategic resilience. About building systems that aren’t easily gamed, misused, or co-opted by external agendas.
To be clear: we’re not in the business of nationalism. At Friday, we work across the US, Singapore, Australia, the EU and the UK (and we’re only in year two). But we do believe that sustainable, independent business models matter. Just ask the EU’s media and publishing sector. After years of chasing short-term ad revenue, many now find themselves overexposed to platform monopolies, squeezed by opaque algorithms, and struggling to reclaim data, margins, and market share. Would they make the same deals with Google and Meta again, knowing what they know now? Probably not.
Data governance isn’t about bureaucracy. It’s infrastructure for trust and independence. And in a world increasingly shaped by AI, geopolitics, and data asymmetries, that might be the most valuable asset of all.
An amusing DFAT fail
A British ethical hacker, Jacob Riggs, just pulled off the ultimate proof of work. The Australian writes that he found a critical vulnerability in the Department of Foreign Affairs and Trade website in under two hours, and then got rewarded with one of the nation’s rarest visas. Riggs says he hacked the site after applying for the visa typically reserved for Nobel laureates and Olympic gold medallists because he didn’t have the usual academic trophies, so he decided to demonstrate real-world impact instead. DFAT acknowledged his disclosure publicly, and he believes that recognition helped tip his application over the line. Now he’s moving to Sydney to work in cyber defence. Jacob, we salute you.
From Manus to Meta
Meta is buying Friday’s favourite Singaporean AI agent startup Manus for over $2bn. Manus has built a loyal following for AI agents that can do deep research, build websites, and handle multi-step tasks, and Meta is snapping it up to fast-track its capabilities and scale the product through WhatsApp, Instagram and its business tools.
We've been watching Manus.ai like hawks - not because we have $2bn to spare, or even just because we admire the product (its model-orchestration approach aligns closely with our own methodology), but because it signals where agentic AI is headed.
But as much as we admire the product, we’ve had to hold off. The issue? No, not the pricey and unclear token model (though it isn’t exactly a draw card) but something far more boring: the terms of use. Even under enterprise agreements, there are no IP or confidentiality protections for user content. And as a company that treats client data as both a privacy obligation and a value asset, we just won’t take that risk - no matter how elegant the tool.
That is why we were excited last year when the open-source community stepped in. Projects like OpenManus - developed by contributors in the MetaGPT open-source community (no relation to Meta) - enabled us to build and run Manus-equivalent agents locally, under our own governance model and with full control over confidentiality, data lineage, and IP! All while integrating the models of our choice.
But then you have to ask: why would Meta pay $2bn for technology that already has an open-source alternative? Well, here's the juicy bit. When it comes to AI agents, code is only part of the value story. Manus didn't just publish code; it built a system that actually works at scale in the wild. It has real usage, real users, hard-won UX, product maturity, deployment learnings, market traction and, you guessed it, data - things an open-source replica can't magically generate overnight.
To us, this looks like Zuck reading the room. The next AI advantage won’t come from one perfect model - it’ll come from whoever can assemble the best system, ship it fast, and embed it into everyday workflows at global scale. If parts of that system excellence are emerging in Asia, Meta wants to be early to it, not late. After all, who wants to bet against China?
See Ya FB!
Yann LeCun is leaving Meta after more than a decade as its chief AI scientist to launch a new venture aimed at what he calls advanced machine intelligence, and he’s using the moment to double down on his contrarian view: LLMs are useful, but they’re a dead end for true superintelligence. In an FT interview, LeCun argues the case for why his next bet is on world models trained on video and spatial data to build a deeper understanding of physics, memory, planning and cause-and-effect. He also gives a candid account of Meta’s recent turbulence, from the post-ChatGPT scramble to prioritise Llama to Zuckerberg bringing in Alexandr Wang to lead a new push.
Our view is that this is a good reminder to future-proof your AI strategy without locking yourself in. LLMs can deliver real wins right now, but the underlying tech is moving fast, and the “best” model or architecture in 12–24 months may look very different. The smartest move is to get your data foundations, governance, and interoperability sorted so you can swap models in and out as capability shifts, rather than baking one vendor’s assumptions into your workflows. Clean data, clear lineage, strong access controls, and portable architectures don’t just reduce risk, they buy you optionality.
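The "swap models in and out" idea above has a simple shape in code: put your business logic behind a tiny interface and keep vendor SDKs at the edges. This is a hedged sketch - the provider classes below are stand-ins that fake a response, not real OpenAI or Llama clients - but the structure is the point.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface our workflows are allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class OpenAIModel:
    # Stand-in; a real implementation would call the OpenAI API here.
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class LocalLlama:
    # Stand-in; a real implementation would call a local Llama runtime.
    def complete(self, prompt: str) -> str:
        return f"[llama] {prompt}"

def summarise(model: ChatModel, text: str) -> str:
    # Business logic depends only on the interface, so swapping vendors
    # is a one-line change at the call site, not a rewrite.
    return model.complete(f"Summarise: {text}")

print(summarise(OpenAIModel(), "Q3 churn report"))
print(summarise(LocalLlama(), "Q3 churn report"))
```

If the "best" model in 18 months is something that doesn't exist yet, it becomes a third twenty-line class rather than a migration project - that's the optionality we mean.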
It's also worth noting that Faculty, the UK-based AI firm just acquired by Accenture, is the one we've been watching most closely when it comes to applying world models and digital twins to organisations. At Friday, we have been stealthily developing our own client-specific world models for some time now, with serious success, and Faculty's trajectory only reinforces that we're betting in the right direction. Their CEO, Marc Warner, has deep roots in decision intelligence and just so happens to be close with Sam Altman - yet another signal that the centre of gravity in AI may be shifting toward those who can model systems, not just language.
Is OpenAI going to fail?
OpenAI is heading into 2026 as a company with extraordinary momentum, and a very fragile cost base. The Economist describes Sam Altman juggling ever more bets (custom chips, e-commerce, enterprise consulting, even a consumer device) while the core business burns cash at scale. Leaked figures suggest OpenAI expects to spend $17bn in 2026 (up from $9bn in 2025) and will likely raise again. At the same time, the competitive gap is narrowing: Google’s Gemini is gaining ground, open models are improving quickly, and there are signs consumer subscription growth has slowed.
In response, OpenAI is pushing to diversify monetisation - exploring e-commerce inside ChatGPT, potential advertising plays, and more aggressively courting enterprise revenue through AgentKit, integrations, and tailored consulting.
Call us very interested observers, because we use OpenAI (plus Claude, Perplexity, Llama, Deepseek and others), but we’re also running a business, and we’re quite attached to this quaint idea of making a profit.
What we’re watching most closely, though, isn’t the GPU bill, it’s how OpenAI embeds itself inside organisations. Because in our view, that’s where the real moat is built. If it can help companies actually deliver value, reshape workflows, and support better decisions, then it won’t matter if the model wars level out or if Gemini wins a benchmark. What will matter is whose system becomes the nervous system of the enterprise.
From our seat, that’s the game: not just shipping features, but earning trust, enabling governance, and showing up where business happens. LLMs are impressive but getting them to land is the real test.
It’s vintage dahhhling
According to the AFR, Depop is starting to look less like a resale app and more like a data-driven department store: 45m users (triple five years ago), roughly 400,000 new listings a day, and a generation treating wardrobes as fluid inventory they can buy, wear, and resell. What makes it powerful isn’t just second-hand demand - it’s the marketplace mechanics. Depop turns messy, unstructured “closet cast-offs” into structured product data at global scale, then uses search, alerts, ranking, and negotiation nudges to create liquidity and shape what’s “in”. In other words, it’s where fashion happens because the data layer makes it frictionless to discover, price and transact.
At Friday, we're fascinated by how you value a business like this. Accounting standards aside, what matters isn't inventory or store footprint, but the quality of the data and how effectively it converts into monetisation opportunities. We built our own models for this, but no standard or framework exists, so the data value doesn't always translate into the valuation (without heavy-duty negotiation, that is).
The long-term question is whether Depop can turn its behavioural data advantage into durable profit (without alienating sellers via algorithms and platform risk). We’ll keep an eye out and let you know!
Watercooler Chat
A selection of the things we like that keep us sane while running a small business…
How To Get Rich by Felix Dennis — The founder of men’s magazine Maxim turned poet wrote a book on entrepreneurship. We’re halfway through and loving it.
Harbour House Hotel in Flushing — Just outside of Falmouth in Cornwall. The food is excellent, the drinks list even better.
GemJar socks — It’s super cold in London, with snow on the horizon. So good woollen socks are a must.
Equafleece dog jumpers — Misleadingly named, these are the best jumpers that a pooch can have. Waterproof and toasty warm. Gio and Sunday both have several!