Europe contemplates new rules for AI – and what this might mean in ANZ
Mon, 31st Aug 2020

At the beginning of 2021, the European Commission will propose legislation on AI that will be, in the first instance, horizontal (as opposed to sectoral) and risk-based, with mandatory requirements for high-risk AI applications.

The new rules will aim to ensure transparency, accountability and consumer protection, including safety, through robust AI governance and data quality requirements.

Europe's approach to regulating technology is based on the precautionary principle, which enables rapid regulatory intervention in the face of possible danger to human, animal or plant health, or to protect the environment. This perspective has helped Europe to become a global leader in the shaping of the digital technology market.

In particular, with the introduction of the General Data Protection Regulation (GDPR) in 2018, Europe considers that it has gained a competitive advantage by creating a trust mark for increased privacy protection.

How will Australia and New Zealand be impacted?

Australia and New Zealand have historically had a close relationship with the European Union (EU) and its member countries. They share a commitment to democracy, the rule of law and respect for human rights. Not surprisingly, the ongoing discussions on AI ethics reflect similar concerns and objectives.

When Europe legislates on AI, it is inevitable that Australia and New Zealand will be impacted. As strong trading partners that share Europe's values, both countries will want to benefit from trusted AI developed in Europe.

Equally, AI developed in Australia or New Zealand should cross national borders without burdensome obligations, particularly for smaller providers. This requires similar rules on AI development and use.

Considering the breadth of AI technologies and applications across all sectors, Europe has embarked on a truly challenging venture. If AI is defined broadly, the law risks becoming unnecessarily burdensome without bringing benefits for many AI applications where trust is not relevant, e.g. industrial applications.

A narrow definition of AI, on the other hand, may not provide future-proof protection given the pace of technological evolution.

Other challenges relate to the type and level of risk that should be regulated. The new AI rules will need to target legal gaps related to the risk that AI applications may pose to physical safety. These gaps may, for example, exist in rules on liability and compensation.

Another type of risk to be addressed by AI legislation concerns human rights, including discrimination and privacy protection. The identification of the criteria for ‘high-risk' AI applications will set a crucial threshold for the application of the new AI law.

Europe's new AI rules are also expected to require an ex-ante conformity assessment before a product or service is placed on the market.

Because AI systems are not “static” products, repeated assessments may be necessary to manage compliance over the AI system's lifetime.

Options related to compliance and enforcement mechanisms range from the creation of a new AI regulator to the introduction of certification bodies, accreditation schemes and training programs for testers.

The associated costs for European AI developers may be outweighed by job creation and a new demand for skills that would enhance Europe's global competitiveness.

The long list of issues in this regulatory debate includes whether the public use of biometric identification systems, including facial recognition, should be restricted unless “allowed by law”, and whether the use of AI-enabled lethal autonomous weapons systems should be banned.

Therefore, Europe's new AI rules have the potential to set a global standard, at least for Europe's trading partners. Interestingly, both Australia and New Zealand have initiatives in place that may shape how the effects of Europe's new AI rules ripple outward.

Australia and the CDR

Australia recently introduced the Consumer Data Right (CDR). The CDR, which applies initially to the banking sector, aims to improve consumers' ability to compare and switch between products and services.

The introduction of the CDR is a unique approach addressing a challenge that many regulators globally strive to tackle. Namely, the CDR “expands” personal data protections beyond the right to privacy.

It focuses on consumer protection and encourages competition between service providers, thus creating further market benefits through better prices and product innovation.

The CDR does not go as far as to create a data ownership right for individuals. But it empowers individuals to manage their data.

Given that data is the backbone of AI, the CDR will fundamentally shape Australia's contribution to the global regulatory efforts on AI. 

New Zealand reimagines regulation for the AI age 

‘Reimagining Regulation for the Age of AI' is a pilot project by the World Economic Forum (WEF), in partnership with the Government of New Zealand, that aims to design actionable governance frameworks for AI regulation.

The Government responded to the complicated endeavour of regulating AI with this project, which adopts an evidence-based, methodical approach, analysing existing regulatory tools and potential policy options.

The Government will pilot the suggested AI regulation frameworks to offer an understanding of what works and why. The openness of this project not only brings a global perspective to New Zealand but also provides an insightful and influential analysis to legislators globally.