The Albanese government’s announcement last month on the potential for artificial intelligence (AI) regulation marks a pivotal moment for Australia. The proposal, while not yet legislated, lays out a framework for how AI may be governed nationally in the future, with broad industry consultation essential to shaping the final outcome.

AI regulation of some form is necessary: it is the government’s role to protect the public, especially when it comes to emerging technologies that carry potential risks. However, as with any regulatory framework, the devil is in the detail, particularly in a field that is evolving so quickly. A key concern, for me, is whether the regulations will effectively address the underlying risks.

There is a risk that overly prescriptive regulation could stifle innovation and impose a heavy compliance cost on small Australian-based businesses. Clear and consistent guidelines, particularly at a global level, can help mitigate these challenges, allowing businesses to focus on innovation that could deliver genuinely beneficial outcomes in high-risk use cases, such as candidate selection.

It’s clear that AI regulation will likely become a de facto global expectation, and Australia must decide how to balance its approach: encouraging innovation while safeguarding against the risks AI can introduce. The European Union, for instance, has already passed the EU AI Act, which I would argue is very prescriptive about AI development techniques but does little to address the underlying risk of bias in recruitment. The UK appears to be moving in a similar direction.

For companies like JobAdder that operate globally, inconsistent international regulation, and a lack of enforcement by authorities, could also put us at a disadvantage against competitors that face no such rules in their home markets and are not held to the same level of scrutiny. It is therefore critical that any AI regulation applying to Australian-based businesses applies equally to overseas players that provide their services in Australia, and that it includes a strong enforcement mechanism so that we are all held to the same regulatory bar.

Recruitment inherently involves making choices between many candidates, only one of whom will be successful, so the risk of unfair biases is high, even when humans alone undertake this selection process.

However, we do think AI can in future play a role in expanding the diversity of candidate pools, actually reducing the unconscious biases that can creep into selection made by a human alone. The risk is that laws classifying the entire use case of “candidate selection” as high risk will reduce the incentive for businesses like ours to innovate in this space.

Given the current uncertain regulatory landscape, and the potential compliance burden in use cases classified as “high risk”, we’ve made a strategic decision to focus our AI investment, for the time being, in areas where AI can be used to automate and streamline everyday tasks for recruiters.

For example, we use AI to extract information from CVs to help recruiters match a skillset to a role more efficiently. We also use AI to draft job advertisements, ensuring all relevant information is included, but leaving the final review to human recruiters. We think that these “efficiency” use cases can have a high impact for recruiters, and we can also deliver them without the distraction of the current regulatory uncertainty.

We are investing heavily in generative AI, but at this early stage we’ve yet to find a killer application that revolutionises recruitment. We are, however, actively testing several hypotheses over the coming months to identify where it can make the most significant impact.

AI today is prone to “hallucinations”. It is becoming more effective every day, but there is no path to it becoming 100 per cent faultless. This means that wherever the stakes are high, and there are many such cases in the recruitment industry, ensuring that a human can control the AI outcome is critical. AI should be viewed as a complement to human judgment, not a replacement.

With appropriate industry input and feedback, the Albanese government’s proposed regulations have the potential to result in responsible development, deployment and use of AI in Australia. While it’s too early to say what the final regulations will look like, one thing is clear: regulation is coming, and companies must be prepared. 

Joel Delmaire is chief product officer at JobAdder.