Agency in the age of AI

WRITTEN BY JISOO KIM

INTRO NOTE


When I was a civilian policy adviser in Defence, I had the privilege of participating in a number of military exercises. Playing alongside my Army, Navy and Air Force colleagues, my role was to provide diplomatic and foreign policy advice to navigate a scenario as the 'blue team', pitted against the enemy on the 'red team'. These exercises were designed by a 'white cell' – a neutral team of planners writing the scenario, targeting our weaknesses and vulnerabilities to see how we would respond.


On my first exercise, I got bogged down arguing that the scenario was unfair, possibly even unrealistic. My ever-patient and more experienced military colleagues told me it was best not to fight the scenario – the script handed to us by the white cell – advice that became a mantra I had to repeat to myself: "Don't fight the white." Following it, I felt a loss of agency, but I went along with it for the sake of the exercise.


Sometimes, reading AI news feels exactly like this. A ‘white cell’ somewhere in the US or China is writing the scenario – and it can feel like we have no agency here in Australia. Yet we do. This isn’t a simulated exercise. This isn’t a game with a rigid script. We can change the outcome. We can vote with our feet, send strong signals to our leaders, say no to big tech, and prepare our institutions.
We can fight the white.


I wanted to delve into this further, particularly after I caught up with Associate Professor Michael Noetel to discuss UQ’s latest Survey Assessing Risks from Artificial Intelligence 2025. Amongst key findings, the study found:



  • Australians want robust safety measures (89% would trust AI more if there were mandatory safety testing, independent audits, and an AI Safety Institute [which the Government is currently setting up]), and
  • Demand for transparency and media coverage is strong (80-85% want more reporting on AI’s societal effects and on how government is managing AI regulation).


I believe there are five main sentiments that drive the distrust underpinning the study’s findings:

  1. We’ve been burnt before.
    Big tech promised social media would bring connection. Instead it delivered anxiety, addiction and polarisation. People aren’t being paranoid applying that same lens to AI.
  2. It speaks like us and sounds correct. But it isn’t us and it can be wrong.
    Natural language was made for humans, so it's disconcerting when a machine can speak it fluently – and has the audacity to make mistakes so confidently, with no 'remorse'.
  3. We can’t trust what we’re seeing anymore.
    AI-generated content is actively degrading the information environment we share. When hyperrealistic synthetic media 'floods the zone', healthy scepticism tips into not being able to believe anything. The shared reference points holding societies together come under assault.
  4. We don’t trust other humans holding the tool.
    AI doesn’t cut jobs. Employers do. AI doesn’t surveil citizens. Governments do. AI widens existing power asymmetries between the people making decisions and the people living with them.
  5. We’re being told something is coming that we can’t imagine.
    AGI. Superintelligence. Systems that exceed human capability across every domain. The people building it can't agree on when it will arrive, whether it's safe, or what it even means. Do you feel uneasy yet? But also, did anyone even ask for this?


Now, let's jump into the interview.

THE INTERVIEW

JK: What are Australians most worried about with AI right now?
MN:
A lot of it comes down to safety and trust. People are excited about the benefits, but they don’t feel secure. They’re not sure if the systems meet their safety expectations, and they don’t know who to trust. Our data shows it’s not just a skills gap – people have genuine safety concerns and feel like the guardrails aren’t there yet.


JK: Why does trust feel so fragile in the AI space compared with other tech?
MN:
Trust matters more when something is unfamiliar. People didn’t trust elevators when they first removed the human operator. They didn’t trust smartphones or social media when they arrived either.

We need experience with a technology to calibrate our trust. Historically, we learned the hard way with planes and cars. Early planes really were dangerous. Cars were extraordinarily dangerous before seatbelts (and they’re still risky today – roughly a 1% lifetime risk of dying in a car crash).

In a way, we should be less trusting of familiar dangerous things, like cars and backyard pools, and more trusting of some new technologies that are actually safer. With AI, people are scared largely because it’s new. That fear is understandable, but it should be balanced against the actual base rates and statistics, not just scary stories.


JK: To help with this, is our government showing leadership on AI?
MN:
On some technologies, yes. The social media ban for young people is a good example. The evidence was pretty clear that social media, on average, harms adolescent mental health. Our government looked at the data, saw that this was both necessary and popular, and led the world with the ban. That shows we can do evidence‑based tech policy and even lead globally.

On AI, they’re doing some good things – putting effort into training people and incentivising business adoption, and standing up an AI Safety Institute to fund experts to assess risks. But when you look at the serious risks on a 1-5 year horizon, there hasn’t been much substantial movement yet.


JK: What AI safeguards are still missing in Australia?
MN:
A really obvious one is safety testing for the biggest, most capable models before deployment. California, for example, has brought in a rule that the really large, potentially dangerous models need to be tested for safety before they’re rolled out. That’s completely normal in every other high‑impact industry: food, pharmaceuticals, engineering and aviation.

We always expect serious safety testing before powerful technologies are deployed at scale. We don’t have that same culture or standard yet for AI in Australia, and it’s strange, because it’s such a basic, common‑sense expectation.


JK: How do AI companies themselves feel about regulation?
MN:
Many of them actually want a level playing field. Some companies are already spending real money on safety testing, risk assessments, and audits. Others are doing an abysmal job and just racing ahead.

If one company invests in safety and another doesn't, the unsafe one can move faster and pick up more market share. In any other sector, we would regulate the renegades. That's essentially what thoughtful AI regulation is trying to do: set minimum safety standards, require testing and transparency, and stop a race to the bottom on risk. It's not about banning everything. It's about holding AI to the same standards we expect from any other powerful technology.


JK: Going back to Aussies having low trust in AI, how big a role does data privacy play in that?
MN:
It’s massive. It’s one of the top things stopping people from using AI at work. They see that the tools are amazing, but they don’t know where the data is going, and unless you’re already a privacy or data expert, it’s really opaque.

We also see that people are often worried about the wrong things. Some won't use WhatsApp because it's owned by Meta, but they happily post all their kids' photos and birthdays on Instagram. What we really need is calibrated trust. Be doubtful where you should be – like with companies clearly using your data to sell ads. Be trusting where you should be – for example, with paid AI tools that have training turned off, where data is held briefly and then deleted.

If it’s true that using a modern AI model with training turned off is as safe as sending a Gmail, that would be really comforting for people, because they’ve been using Gmail for 10-15 years without worrying too much about data security.


JK: So is this just a communications problem or do our privacy laws need to change?
MN:
Probably both. Either we need laws that are better adapted to AI so you can clearly explain to people what’s okay and what’s not, or we need to translate the existing laws into language and analogies people actually understand. Right now, it’s too hard for a normal person to know whether it’s safe to put their data into an AI tool.


JK: Going back to what you said about 'calibrated trust', how does that play out with everyday AI use?
MN:
A good example is someone doing their tax. I spoke to a guy whose wife was spending the whole weekend sorting tax receipts. I threw Claude’s Cowork feature at a folder of receipts, made a coffee, and came back three minutes later – it had filed them exactly the way he would have expected.

But he still felt nervous about his tax data “going up into somewhere”. In reality, that kind of use – especially with a paid subscription and training turned off – is closer to how we already use Gmail. Your email is in the cloud, it can theoretically be subpoenaed, but day‑to‑day, it’s a reasonable, well‑understood risk.

A lot of what we’re seeing is a skill and literacy issue, not necessarily a law issue – like people accidentally sharing ChatGPT chats publicly so they get indexed by Google. That’s more like having your Google Docs set to “anyone with the link” and then blaming Google when it leaks. People need both clearer rules and better mental models.


JK: Beyond privacy, why are some people resisting AI in their work?
MN:
Job insecurity is a big psychological driver. I’ve got a colleague who built a tool that can plan a whole semester of university teaching content. The first reaction from some staff wasn’t: “Great, this saves time,” but: “What does this mean for my job?”

There’s a famous idea that a person can’t believe something if their salary depends on them not believing it. For some people, using AI feels like cooperating in their own redundancy. That creates a strong, very human resistance.

I don’t know exactly what the employment market will look like in five years. But my personal solution isn’t to avoid AI; it’s to expect governments to build a safety net and for society to adapt. If anything, avoiding AI just puts you further behind the curve on tools everyone will need to be familiar with.


JK: How are people generally responding to AI right now?
MN:
Broadly, I see three rough profiles. The first are those in denial, who always say: "This won't happen." These are the people who said AI would never do creative work, and then we got Midjourney and modern language models proving them wrong.

The second are the fatalists. This group thinks there's nothing we can do, so we might as well just enjoy it while it lasts. That's pretty defeatist.

The last group are agency-focused. They think, "This is coming, and I can influence it." This is where I try to sit. Yes, AI is coming, and yes, it could be huge, but the things we do now can influence both our personal futures and society's trajectory. It's not about ignoring the problem or admitting defeat; it's about recognising we still have agency.


JK: What would 'good leadership' on AI look like in Australia over the next 6-12 months?
MN:
In a perfect world, we'd see clear, enforceable safety standards for high‑risk models, mandatory pre‑deployment testing for the biggest systems, independent auditing – through something like a well‑resourced AI Safety Institute – and communication that translates all of this into normal language Australians can understand.

The goal is to create an environment where people feel safe to use AI – at home, in business, in government – because they trust that the systems meet reasonable safety expectations. Just like aviation safety helps us feel okay getting on a plane, AI safety standards could help Australians feel okay putting AI to work.

CLOSING THOUGHTS

I left that conversation with a clearer view of what fighting the white looks like from where we stand. In Australia, we have the luxury of not having a Silicon Valley–Wall Street nexus forcing the hand of our leaders. We're a trusted middle power sitting on the sidelines of the Great Power Competition alongside many other like-minded partners. We can use this to our advantage. We can shape our own rules and norms around how we use AI.


We shouldn't just go along with what is happening because we're not writing the scenario. There's a lot we can do to understand the AI landscape and what it means for our interests: which tools are actually safe, where our data is going, which laws need changing, which global partnerships matter.


On an individual level, simple steps to increase agency include:

  • Choosing tools from companies whose values you're willing to back,
  • Just saying no – like many of us millennials are doing now with Instagram and TikTok by deactivating our accounts and living our analogue lives,
  • Reading the terms and conditions to see where your data goes and what the opt-out options are,
  • Asking your employer whether there are genuine feedback loops for staff to voice their concerns with AI,
  • Telling your local MP that mandatory AI safety standards are a reasonable ask – because most Australians would agree, and
  • Staying informed and making decisions with the view that agency is better than denial or fatalism.

It all adds up to getting as much of a full picture as possible from those you trust – which is what we hope to provide at ClearAI.


We have to remind ourselves that this isn't a simulated exercise. The scenario isn't fixed. We have agency. We can fight the white.

Take this further. Download our extended resource and explore how human-centred intelligence can work inside your organisation.
