Fool me once

WRITTEN BY JISOO KIM

INTRO NOTE


Something has changed in how we relate to information online. Most of us can feel it.
AI-generated images, videos and voices are now cheap, fast and convincing enough to fool anyone on a bad day. The result isn't a world full of people believing everything they see. It's a world where people have stopped trusting anything at all.

That loss of trust has a cost. Democracy doesn't run on perfect information – but it does need a shared baseline of reality. When that erodes, so does everything built on top of it.

So it’s probably good that I sat down with Callum Harvey for a chat. He’s currently doing his PhD at the Oxford Internet Institute researching cyber threat intelligence and AI policy. His prior roles span the Australian Department of Industry, Science and Resources, CyberCX, and the Harris Cyber Policy Initiative at the University of Chicago. We talked about where we are at with mis/disinformation and deepfakes: what's working, what isn't, and why the answer probably starts smaller and closer to home than we think.

THE INTERVIEW

JK: I want to start with the landscape. When you look at AI-generated misinformation and disinformation right now, what are we actually dealing with?


CH: The hopeful way to look at this is that yes, there is a lot of mis- and disinformation online, but we also don't have a particularly great read on whether any of it's effective. There is no easy way to measure the effectiveness of any kind of influence operation. That goes back to the Cold War, if not before. If you read Thomas Rid's book, Active Measures, both sides were doing leaflet drops over East and West Berlin. It was a spectacle, but we don't know if any of it worked. And the same is true today.


JK: So we don't actually know if mis/disinformation and deepfakes are working?


CH: We don't. And that's actually kind of good news, in a way. But the inverse is that you can't measure the policy response either. That doesn't mean people should roll over and give up on this – they definitely shouldn't – because people do believe information that runs counter to fact, and that sucks. We need to build up people's media literacy to counter that. A lot of it is in the eye of the beholder, which I find really fascinating. A lot of people in government talk about how difficult it is to counter disinformation emerging from Russia and China, but we have no metric for whether it's doing what it says on the tin.


JK: Let's talk about Romania, because I know you’ve done comparative studies on it and I think it's one of the most concrete recent examples of this playing out at a national level.


CH: Romania is a really instructive case. In 2024, you had false information generated at scale and posted to TikTok – a heap of content alleging that certain candidates were influenced by foreign powers, all in support of a candidate called Călin Georgescu, who was himself allegedly backed by Russia. His win in the first round of the 2024 Romanian election was ultimately overturned by the Romanian government and judiciary, with the support of the European Commission – the first real test of the Digital Services Act. But here's the thing – there was no evidence that the content was necessarily effective or that people believed it en masse. Of course, anecdotally we saw that people were believing it, and that's horrifying. But there was no real evidence it was decisive.


JK: And what did the regulatory response actually look like on the ground?


CH: The Digital Services Act places a number of quite hefty obligations on online service providers. I prefer that term to ‘platforms’, because ‘platforms’ implies a lack of agency on the part of these actors. TikTok was hauled before the European Commission and issued with a please-explain. They collaborated with the EU and the Romanian government to try and address what had occurred. When the re-run elections were held in May 2025, TikTok hired Romanian-language content monitors for the first time. There's a great report from CeTAS (the Centre for Emerging Technology and Security) at the Alan Turing Institute that covers electoral disinformation in 2025 in detail. Mostly what we saw wasn't necessarily deepfakes. It was false content generated at scale. Memes. Volume. Speed.


JK: That's something I want to push on – because there's a common assumption that the deepfake is the threat. The hyperrealistic video. But what you're describing sounds more like a flood of low-quality content that just... overwhelms.


CH: Exactly. And watermarks aren't the silver bullet people think they are. There's a great example with Sora, OpenAI's video generation tool, which always generated watermarked content. People just cropped the video to remove the watermark and posted it. The existence of a watermark is not the answer to false information spreading. My colleague Sam Stockwell at the Alan Turing Institute has gone through and found examples from Poland, the Czech Republic and Canada – deepfake videos, scam promotions using politicians' likenesses – and in most cases that content wasn't watermarked at all. But fixating on watermarks misses the point. The harm doesn't require technical sophistication. It requires reach.


JK: So what does actually work? Because when ClearAI wrote the Menzies Research Centre AI policy paper, one of our recommendations was digital literacy – and some pushback we initially got was: it's too sophisticated now, you can't teach people to spot this stuff. What do you think?


CH: I think that's wrong, actually. Media literacy is a core part of the answer and you absolutely can teach it. The Nordic states and the Baltics are the shining example of how you can build whole-of-society capacity against misinformation – and you do it by teaching critical thinking, not by telling people what to believe. You teach people to ask: is this piece of content trying to make me angry? If the answer is yes, that's a signal. Does it look a little blurry? Is the mouth sitting in a funny position? Are there other sources corroborating this? A reverse image search on Google is honestly one of the most powerful open-source intelligence tools available to a regular person. Low barrier to entry. Freely available. Devastatingly effective.


JK: I actually got duped by a deepfake recently – a video of a prominent Australian politician collapsing in the Senate – and I scrolled past it. Didn't investigate. It stuck in the back of my mind until the pollie themselves posted a correction. And I thought, if they hadn't posted that, it just would have sat there in my brain.


CH: So have I. And I think that's actually the key point – you don't have to be able to spot every piece of fake content. You have to have a level of calibrated scepticism. Is this making me feel something strongly? Is it designed to make me angry or afraid before I've even thought about whether it's true? Just taking a breath and stopping before you share is most of the battle. It's good life advice in general, but it's specifically good advice for navigating the information environment we're in now.


JK: I want to talk about Australia specifically. Because the eSafety Commissioner can issue fines of up to $49.5 million. We've got voluntary codes. We're standing up an AI Safety Institute. And yet, I look at what happened with DeepSeek being banned in some countries, Grok's nudification feature being redlined in places like Malaysia, and I think, where are the teeth?


CH: eSafety has, for much of its life, relied on voluntary standards and the compliance of platforms to keep people safe. It's done this with social media, with child safety, and it's probably going to try to do the same with AI. That's never really been effective, if we're brutally honest. The number of times eSafety has put out comms asking industry, "Why haven't you complied with this voluntary code of conduct?" That's not a regulatory posture, that's a request. The fines are technically enforceable, but for the size of these firms you need to be charging maximum penalty units to make that land. And even then, for a company operating across multiple jurisdictions, $49.5 million is a rounding error. What's needed is stronger regulatory controls – whether that's an AI Act or mandatory guardrails that AI platforms must comply with to operate in the Australian market.


JK: And our new AI Safety Institute, what will it need to actually be useful?


CH: It needs teeth. There is only so much benchmarking of models you can do before people start asking: so what? You've measured that this model is more likely to generate harmful content – great. If the AISI isn't plugged into something with genuine enforcement power, the whole thing risks falling over. Allegra Spender [Federal Member for Wentworth in NSW] made this point really clearly – it can't just be an organisation that writes reports. It needs a right hook. Done right, it's a very exciting possibility. But it needs to be willing to go toe-to-toe with industry in the interest of community safety, not scoring points for political reasons.


JK: Last question. For someone reading this who isn't a policy person, isn't a researcher – what's the one thing they should actually do?


CH: Be critical, not paranoid. For any piece of media you consume, regardless of whether it's a newspaper, a TikTok, a WhatsApp forward – ask yourself: is this designed to make me feel something before I've thought about whether it's true? If yes, pause. Use a reverse image search. Check whether other sources are reporting the same thing. The main tool for discerning what's real from what's manipulative is up here, in your head. And it's actually not that difficult to just not believe everything you're spoonfed.

CLOSING THOUGHTS

I left this conversation thinking about something Callum said early on – that we don't actually know if AI-generated mis/disinformation or deepfakes are working. If we can't measure the harm, we can't measure the fix either. We're flying a little blind. But what I do know is that the gap between what's possible and what's governed is real, and it's growing. Australia has the ingredients to do this well: a capable public service, a legitimate shot at an effective AISI, and homegrown researchers like Callum who understand both the technology and the policy levers. What's been missing is urgency. And teeth.


I join many other voices in saying that the AISI needs genuine enforcement power, not just benchmarking capability. Voluntary codes have had their chance.


To industry, I'd say this: don't treat governance as a PR exercise. If you're operating in the Australian market, you have a civic obligation to the people in it. Hire the researchers. Build the policies. Or expect regulators to do it for you, eventually, and less elegantly.


For the rest of us – the workers, the uni students, the executives, the parents, the people just trying to figure out what's real:

  • Pause before you share. Anger is a signal, not a prompt to act.
  • Use Google reverse image search. It doesn't take long and it works.
  • Check whether other sources are reporting the same thing before you let it settle into your memory as fact.
  • Treat Facebook, TikTok and other social media sites as entertainment, not news infrastructure.
  • Ask your local MP where Australia stands on mandatory AI safety standards for the tools millions of us use every day.

Take this further. Download our extended resource and explore how human-centred intelligence can work inside your organisation.
