The Australian government has announced plans to regulate ‘high-risk’ applications of artificial intelligence (AI) rather than enacting a single comprehensive AI law, as the European Union has done. The decision follows the Safe and Responsible AI in Australia consultation conducted last year, which drew more than 500 submissions. While respondents were enthusiastic about the potential of AI tools, many raised concerns about their risks and called for regulatory safeguards to prevent harm.
The government’s regulation will focus on the areas of AI use with the greatest potential for harm, such as workplace discrimination, the justice system, surveillance, and self-driving cars. A temporary expert advisory group will be established to support the development of regulations in these areas.
One of the challenges in regulating AI is that many AI tools are already in use in Australian homes and workplaces without specific regulatory guidelines. A recent YouGov report found that 90% of Australian workers use AI tools for daily tasks, despite their known limitations and flaws. These tools can hallucinate, presenting false information as fact, and the lack of transparency around their training data raises concerns about bias and copyright infringement.
Regulating AI effectively depends on identifying high-risk settings. Risk, in the context of AI, is not absolute; it exists on a spectrum, and the level of risk depends on the contextual factors that create the potential for harm. Knitting needles, for example, pose little risk in everyday life, yet at airport security they are treated as dangerous items and restricted accordingly.
Understanding how AI tools work is also crucial to identifying high-risk settings. Organizations need to be alert to the potential for AI tools to enable gender discrimination in hiring, while individuals need to recognize the limitations of these tools, as exemplified by the American lawyer who relied on fake case law generated by ChatGPT. Risks arising from the AI tools themselves must be managed alongside risks arising from how individuals and organizations use them.
In its response, the Australian government highlights the need for a diverse expert advisory body on AI risks. This body should include representatives from industry, academia, civil society, and the legal profession. Within industry, membership should encompass various sectors, including healthcare, banking, and law enforcement. Academia should contribute not only AI computing experts but also social scientists specializing in consumer and organizational behavior. These experts can advise on risk analysis, ethics, and concerns related to adopting new technology, such as misinformation and privacy issues.
In addition to managing current AI risks, the government also needs to address potential future risks. This could be achieved through the establishment of a permanent advisory body that can manage risks associated with new technologies and new uses of existing tools. Such a body could also provide guidance on AI applications with lower levels of risk where limited or no regulations are currently in place.
Misinformation is one area where the limitations of AI tools are already well documented. Addressing it requires individuals to develop strong critical thinking and information literacy skills, and transparency about when images are AI-generated can help prevent consumer deception. While the government’s current focus is on transparency in high-risk settings, further guidance and regulation will likely be needed to address these issues comprehensively.
By regulating high-risk areas of AI implementation, the Australian government aims to ensure the responsible and safe use of AI while harnessing its potential benefits. With the establishment of an expert advisory body and collaboration across different sectors, Australia is taking essential steps toward successfully regulating AI and managing associated risks.
Ravina Pandya, Content Writer, has a strong foothold in the market research industry. She specializes in writing well-researched articles across industries including food and beverages, information technology, healthcare, and chemicals and materials. With an MBA in e-commerce, she has expertise in SEO-optimized content that resonates with industry professionals.