Lawyers blast government delay to AI regulation

By Sarah Jensz | 05 Oct 25
AI

Young people, those with disabilities and people from diverse backgrounds are at serious risk of missing out on job opportunities as a consequence of delays to laws regulating the use of AI, legal experts say.

With new research showing that discrimination caused by inbuilt AI biases may be widespread in hiring, they are calling on the federal government to resist pressure from business interests against AI regulation and to move quickly to shore up public confidence and prevent further harm.

“While consultations are happening, the technology is continuing to move forward and real harms [are] actually being done to people,” says the Australian Human Rights Commissioner Lorraine Finlay.

The Human Rights Commission has advocated for four years for guardrails around the use of AI. Last year, the body submitted eight recommendations to the federal government, including a human rights-centred, risk-based approach and a dedicated AI act.

The EU introduced the world’s first AI act last year, built on a risk-based framework that classifies systems into tiers: prohibited AI practices, high-risk AI, AI subject to transparency obligations, and minimal-risk AI.

Finlay says she is dismayed that AI regulation has again been delayed in the wake of last month’s economic reform roundtable, held at Parliament House in Canberra, where Treasurer Jim Chalmers announced a “gap analysis” to identify whether existing legislation can effectively regulate the technology or whether a dedicated AI act is needed.

“The need for this reform has only gotten more urgent over the last few years,” Finlay warns.

“There comes a point where we need to stop consulting and … deliver action on the ground to make sure protections are in place.”

University of Melbourne research published in May has backed Finlay’s call for urgent reform to ensure the rights of disadvantaged groups are protected.

Legal academic Dr Natalie Sheard’s research found that AI hiring systems are contributing to widespread discrimination in the local jobs market against people with English as a second language, people living with a disability, and those under the age of 25.

Biased training data and the use of “proxy” features that stand in for protected attributes like gender, ethnicity, age or disability, were among the ways AI systems were distorting fair recruitment, according to research published in the Journal of Law and Society.

Sheard’s case studies included HireVue, a US-based, AI-enabled video interviewing and assessment platform that was used to conduct more than one million job seeker interviews in Australia in 2022.

Just six per cent of the data HireVue used to train its AI came from Australia and New Zealand, while nearly 80 per cent came from the US. First Nations Australians were entirely absent from the dataset.

“The ideal candidate created by these systems is mirrored on dominant groups in society … white, male, non-disabled,” Sheard says.

Transcription errors in HireVue interviews were significantly higher for non-native English speakers, the research showed, with error rates rising to 12–22 per cent compared to less than 10 per cent for US English speakers.

“Anyone who is unrepresented in the training data or who is experiencing present day or historical discrimination, which again is [embedded] in the data that will be used to train the model, could experience discrimination,” she says.

About 61 per cent of Australian organisations used AI “extensively or moderately” in their recruitment processes this year, according to the Responsible AI Index, a report produced by an Australian research consultancy. Less than half reported “high” engagement with diverse stakeholders to identify and mitigate potential bias and discrimination.

Sheard says AI programs also create opportunities for “intentional discrimination, with evidence that some employers use the systems to weed out certain categories of people.”

“It’s masked, it’s cloaked. It’s very hard for applicants to know that it’s happening to them.”

Whether carried out by an AI system or a human, discrimination is against the law. The lack of transparency in AI systems and their uses needs urgent review, Sheard argues.

“There are a lot of barriers for employers to ensure that they are deploying a tool that doesn’t discriminate,” she says. “It might be that employers do have to make the decision… that they don’t deploy these tools.”

The Productivity Commission has opposed the introduction of AI legislation, warning it could limit an economic growth opportunity for Australia.

Its latest report estimates AI could add more than $116 billion to Australia’s economy and lift labour productivity by over 4 per cent over the next decade, a significant boost given that annual productivity growth currently averages 0.9 per cent.

“Increasing efficiency in itself shouldn’t be a goal if it comes at the expense of human rights,” says commissioner Finlay.

She argues that embedding human rights into AI law is “good for business” because it builds public trust in emerging technologies.

According to a Business Council of Australia report, there is a high level of scepticism amongst Australians about artificial intelligence.

“Creating a safe, trusted environment around which AI operates actually helps [businesses] drive greater productivity gains,” Finlay argues.

“That’s because people feel that they can embrace the technology and they’re confident in what it will deliver.”