Australia is rapidly falling behind in the AI arms race, as it struggles to respond quickly to the changes that have been predicted for decades.

What follows is our submission, made on 2024-05-09, to the Senate's Select Committee on Adopting Artificial Intelligence (AI).

A robot


Fusion thanks the Select Committee on Adopting Artificial Intelligence for the opportunity to make a submission about how AI should be adopted in Australia.

FUSION: Science, Pirate, Secular, Climate Emergency represents over 1,700 members who believe in technological advancement, climate action and civil liberties.

This submission has mainly been written by Owen Miller. Mr Miller graduated in 2012 from the University of Sydney, studying BSc (Computer Science) and BEng (Mechatronics) (Hons I). His career as a software engineer has included roles at Amazon, at the Defence Science & Technology Organisation (DSTO) and at startups in Sydney and New York.


Australia’s potential in Artificial Intelligence is vividly illustrated on our $50 note, featuring inventor David Unaipon. For decades, we've valued innovation, whether it's the mechanical shears depicted on the note, the first cloned sheep, or the invention of Wi-Fi. David Unaipon didn’t hold back − alongside the shears, he even pursued the ambitious dream of powered flight.

Much like the Wright Brothers, who challenged conventional wisdom to achieve the first successful flight, Unaipon pushed boundaries and dreamed big. He envisioned possibilities beyond the constraints of his time, proving that innovation thrives when we defy preconceived limits. If Australia is to make a significant impact in the world of AI, we must also challenge existing notions and embrace bold thinking.

The $50 note also serves as a reminder that there is money to be made here if we choose to see it! By investing in AI, Australia can unlock tremendous opportunities.

The real risk lies not in AI itself but in missing the opportunity. If we fail to leverage AI, we risk being outpaced by the rest of the world, then pushed into economic subservience.

While governments and the media have often focused on the potential dangers, it’s crucial to explore the upsides of profit and enlightenment too. In this submission, we'll address the opportunities and challenges as outlined in the terms of reference, and conclude with practical recommendations for harnessing the future of AI in Australia.

Bias and Discrimination

Although AIs might dream up new animals that don’t exist, or people with 7 fingers, the only time any patterns are labelled as “biased” is when the AI spits out a known, taboo stereotype from human society.

Maybe the stereotype is correct, in which case we’d agree with Socrates that the pursuit of knowledge and truth is most moral, and that we therefore shouldn’t lie as a way of trying to take the moral high ground.

But if the stereotype is incorrect, then how can we blame the AI for the fault? It’s garbage in, garbage out, is it not?

Where are AIs (or humans, for that matter) supposed to get unbiased, insightful training data?

The real problem here is that libraries have devolved from centres of civilisational wisdom into a mere afterthought.

The Round City in Baghdad, home to the House of Wisdom.

When the Internet (especially Wikipedia) usurped encyclopaedias and libraries as the most practical source of information, it was left to citizens to create these resources themselves, with none of the government funding or attention that had historically been given to their physical predecessors.

When scientific journals went online, they continued to live behind paywalls, despite their printing and distribution costs disappearing. If government funding is enabling this research, then why aren’t citizens able to access it freely? When Aaron Swartz tried to spread this research, he was arrested by the FBI and pushed to suicide.

But even if citizens could freely read the research their taxes had funded, is there any point? The current approach to academic grants has incentivised researchers to churn out worthless scientific papers that can’t be replicated.

The end result is that the pursuit of knowledge has unfortunately become a public common where nation states are now free riders − they are not prepared to put real effort into maintaining something that benefits the whole world. It’s companies who are pushing forward the advances in medicine and computing, especially in AI research.

We see this issue in software too: despite the widespread acclaim for open-source software, there are few entities chipping in to pay for it, and we repeatedly (Heartbleed, XZ Utils) see major bugs that can be attributed to the fact that critical pieces of technology are maintained as side-projects or as under-staffed, under-funded labours of love.

If we can’t depend on these public commons, it’s not because the authors didn’t care or because they were biased: it’s because their consumers are not being compelled to actually fund the public commons that their entire society depends on.

Australians’ physical lives may exist in a democracy, but their online lives are largely governed without their input − by these unloved open-source components, and by exploitative American tech companies. When it comes to AI, models cost so much to train that we have no choice but to take what’s created by large tech companies like Facebook, Anthropic, Google or OpenAI.

What if Australia foots the bill for these public commons? What if Australia pays people to work on open-source software and open knowledge? Sure, other countries would benefit too, but the return on investment would be enormous. When Instagram was purchased by Facebook for $1b USD, it only had 13 employees. Look how many people were involved in creating Linux, or Bitcoin.

Compare this to the eye-watering sums that are going down the drain, trying to create a Ministry of Truth to boss around Facebook, or drafting legislation that forces Facebook to pay Australian publishers of news-tainment.

Why didn’t you just fund 13 software engineers to create something better?


Instagram in 2012. Source

International Approaches to Mitigating AI Risks

In thinking about misinformation as a risk from AI, the logical conclusion is that we urgently need to clean up all the misinformation in our society that’s being fed into our AIs. On this front, there are two main issues that stand out: the slowing of science, and the rise of news-tainment.

The slowing of science has been widely discussed, so let’s dive deeper into how news-tainment is affecting AI.

When newspapers were the best way of staying in touch with the world, it made sense for them to feature obituaries and job advertisements. The earnings from these public notices could then subsidise investigations into important stories, which might not have generated enough revenue on their own to justify publishing the paper. From this angle, there is some merit in the Australian government’s scheme to collect money from Facebook and Google when they show news.

But classifieds and obituaries no longer appear there not just because there are better ways of presenting this information − it’s also because newspapers have already made the transition away from using ads to support real news. Their solution to “how do we subsidise investigative journalism?” was to just stop doing investigative journalism, and start publishing news-tainment.

So sure, the idea of Google paying for news sounds nice in theory, but now Google is paying companies who publish gossip about the hidden meaning behind Meghan Markle’s dress. Where is the line between journalists and gossip bloggers? These are not the sort of people who are going to put on a bullet-proof vest and visit the Middle East. No wonder the government is so willing to create a Ministry of Truth − there was nobody really interested in going after the truth in the first place.

The ABC was meant to serve as an unbiased source of real news, but they’ve bizarrely decided that they should join the competition for news-tainment.

ABC “News Breakfast” discussing picnic food

When other channels pursue this strategy, at least they collect ad revenue. But what is supposed to be the reward for the ABC covering gossipy small-talk topics, and running documentary reports that actively mislead voters into thinking that only two political parties exist?

The way that this affects AI is that it can no longer get trustworthy facts for its training data − it can only learn gossipy nonsense. If all you wanted was an AI for managing your picnic, then that’s great; but if you’re worried about AI feeding people misinformation, then you need to follow the lead of some other countries:

Qatar 🇶🇦

In 1996, a relative of the Saudi King tried to thwart a BBC documentary, leading to the closure of the BBC Arabic TV station. In response, the Emir of Qatar provided a loan to sustain the staff for 5 years under a new organisation: Al Jazeera.

The organisation continued covering international news and maintained its habit of covering politically sensitive issues, including the Arab Spring, which saw it banned in Tunisia.

Qatar is arguably not the only, or even the main, beneficiary of funding an international news organisation. The main benefit of funding Al Jazeera has been the end of the one-way flow of information from the “West to the rest”. We see such an effect here too, with Australians paying closer attention to US events and US politics than to Australian politics.

Ecuador 🇪🇨

When journalist and Australian citizen Julian Assange faced arrest as a political prisoner, it was Ecuador who protected him.

The attack on press freedoms was not entirely successful: Wikileaks is still operating, and it has undoubtedly played a part in inspiring other groups such as Bellingcat.

Access to truth is absolutely necessary if AIs are supposed to learn an unbiased view of the world.


Julian Assange at the Ecuadorian embassy in London, 2012. Source

Taiwan 🇹🇼

The g0v (“gov-zero”) movement in Taiwan is an open-source collaboration started in response to the perception that government websites were substandard in their provision of information to citizens.

The result was that the government actually started merging these citizen contributions into the official websites. The movement has since spread to Hong Kong and Italy, and has taken on more projects, including Cofacts, a collaborative fact-checking bot.

Tools for AI − Singapore 🇸🇬

In May 2022, Singapore launched AI Verify, a testing framework for AIs to check whether they meet various AI safety goals. Unfortunately, the tool seems to be closed-source and requires potential users to submit a form for approval − an approach reminiscent of Australia’s squandered potential with the Trusted Digital Identity Framework.

Opportunities for Benefitting Citizens and the Environment

AIs are beneficial not just because they augment the thinking abilities of humans, but most critically, because they help us bridge the gap with computers, so that we can get their focused thought, reliability, and scalability.


Such computing benefits can help humans in ways that were not being achieved before, because doing so didn’t directly benefit a government or a profit-maximising company.

The modern economy

As humans lean on AI for societal benefit, it’s entirely feasible that they’ll inadvertently create compelling market offerings at the same time. If we think that Australians are likely to create scalable, useful inventions with AI, then it would make good financial sense to free up as many Australians as possible to pursue such endeavours.

If Australia were to implement a Universal Basic Income, then Australians could make the most of AI, instead of needing to worry about AI stealing their jobs and, in turn, their ability to live independently.

The reduced relevance of job security could similarly benefit scientific discovery in Australia − with grants being less necessary, researchers could more easily focus on what they’re most passionate about, instead of what’s most likely to get them over the next funding hurdle.

Opportunities to Foster a Responsible AI Industry in Australia

When we say that an AI is “responsible”, it should really be made explicit that we want an AI with Australian values. That is only going to happen if we have a multitude of Australians working on it.

We mentioned earlier that AIs are trained on the zeitgeist and are reliant on open-source projects. With a Universal Basic Income, society could self-organise around the immense shared goal of winning the race against misaligned AIs and their misaligned host nations. This aligned AI would in turn improve Australia, in a virtuous cycle.

This is in contrast to Adam Smith’s Invisible Hand, which sees people self-organising around the goal of exploiting each other just to put food on the table.

Potential Threats to Democracy

When the Arab Spring was getting started, there were people (including Google employees) who were crediting Google Maps as having played a part in what seemed at the time to be a success. The reasoning was that now ordinary people could see the lavish palaces owned by their leaders, and this helped encourage them to riot.

Following this logic, it’s easy to see a similar risk in the West − with sudden access to much greater knowledge and insight, it will be ever clearer that our governments for decades have been willing to kick big problems down the road, preferring band-aids that merely appease disconnected voters until some point after the next election.

People will realise that a Ministry of Truth is always part of a spiral towards societal decay and villainous tyranny.

People will realise that our decades of corporate welfare for oil & gas companies have contributed to the regular bleaching of the Great Barrier Reef, and to the imminent inability of Queenslanders to insure their homes against climate disaster − just like what’s already playing out in California.

Insurers pulling out of California. 2024-05-09

Once such deep distrust arrives, then how is any other government supposed to take the reins and be trusted by the public? Will the only hope for Australia be a potential annexation by New Zealand?

The only logical way to stave off such disaster is to rapidly transition away from the short-term band-aids, towards real democracy, where political rivals are actually able to compete in a contest of ideas.

Even if the government manages to entirely ban AI from Australia, are you going to ban people from travelling abroad and chatting with an unfiltered AI for a day? You need to choose now if you’re going to follow the path of Socrates, or of North Korea.

AI is not an isolated problem that exists only within a circuit board: robots are emergent properties of our society, and any problems they exhibit could’ve already been exhibited by intelligent humans given poor or misaligned education. Should you be fixing their education, giving them access to reliable facts and knowledge, or should you worry that their intelligence is getting out of hand, and put some lead in the water supply?

We can get the best results from this new intelligence if we treat robots and their trainers with the respect they deserve.


Recommendations

  1. All government-funded scientific research should be freely accessible.
  2. All government-funded software should be open-source.
  3. The Digital Identity System should be opened up to 3rd-party developers, so they can verify users as unique humans without relying on phone companies or other unreliable proxies.
  4. Australia should implement a Universal Basic Income, perhaps in the way advocated by Basic Income Australia, on the understanding that it will ease the chaos in the job transition, and will democratise innovation in the fast-paced world of AI.
  5. Australia should stop delaying the inevitable by giving corporate welfare to companies and workers who are being replaced by technological advancement. If these workers need a way of providing food and shelter for their families, it should not be tied to their ability to restrict progress.
  6. To ensure unbiased truth, Australia should fund a broader (potentially foreign) group of media organisations, not just the ABC and SBS.
  7. The government should encourage the National Anti-Corruption Commission (NACC) to actually start publishing its findings, thereby regaining public trust in our institutions.
  8. Australia should steer away from censorship bodies, especially the “Ministry of Truth”, and should instead fund viable open-source competitors to existing public squares.
  9. Australia should urgently fund computing power, electricity and staff for training AIs to compete against foreign and corporate-sponsored AIs with inherent disregard for Australia’s needs.