A government charter covering the use of artificial intelligence by public departments may be a world first, but will it go far enough to protect citizens as AI is increasingly used in everything from policing to managing beneficiary applications?
ANALYSIS: The Government’s Algorithm Charter for Aotearoa New Zealand, unveiled by statistics minister James Shaw yesterday, is a slight document at just three pages in length.
Its simplicity is to be welcomed – this is the equivalent of the AI principles many Big Tech companies have produced themselves in a bid to combat eroding trust in the AI systems that are integral to their business.
But the devil will be in the detail of how the 19 government departments that have signed up to the Charter will go about making sure they adhere to its “commitments”, which cover everything from transparency and human oversight of AI systems to privacy, ethics and human rights.
An AI risk matrix
The Charter includes a risk matrix that allows government departments to determine where to apply it. For instance, the higher the likely impact of an AI-driven decision, and the more likely it is to occur in standard operations, the more important it is to apply the charter.
The focus really is on algorithms that carry a high risk of unintended consequences and a serious impact if “things do go wrong, particularly for vulnerable communities”. Important issues, such as Māori Data Sovereignty, notes the document, are “complex and require separate consideration”.
There’s no mention of an oversight body to police government use of AI and to impose penalties for breaches of the charter.
Separately, as part of Tech Week events, numerous discussions have focused on AI and the need for greater regulation of its use to build up the public’s trust in the technology.
“When AI applications start impacting on people and people’s rights, especially when they are used in fields like healthcare and employment, I think the government owes a duty of care to its people to ensure that those applications have the proper oversight and are regulated in a way that the benefits of those applications are distributed equitably,” said Tech Week panelist Anchali Anandanayagam, a partner at Hudson Gavin Martin, an Auckland-based boutique corporate and commercial law firm specialising in technology, media and intellectual property.
While provisions of the Privacy Act covered all organisations, public and private, that used citizens’ data, AI was such a powerful area of technology that a specific focus was warranted, added Anandanayagam.
“We need to have global conversations about how other countries are going to be regulating this technology.”
Getting AI governance right
Speaking on the same panel, Matt Ensor, project director of Beca.AI, the artificial intelligence division of engineering firm Beca, said the company had been trialling use of AI in public consultation it undertakes ahead of construction projects.
“The stories around AI that are really resonating are those that are doing things for the public good, not so much about the efficiency of getting things done,” he said.
Beca was currently advertising six roles related to AI and natural language processing with plans to recruit a further six AI specialists. Beca’s business model relied on developing AI applications that could be applied to hundreds of customers.
As such, it was crucial for Beca to get its governance of AI right.
“No one enters a firm like Beca having been trained in AI ethics,” said Ensor.
“We are trying to get people to understand that they are all responsible. It’s a bit like health and safety, we are kind of on a journey.”
Steve O’Donnell, a managing partner in IBM’s Global Business Services, said tackling bias in AI systems was a key priority for tech companies developing applications that could have public sector uses.
“It comes back to not letting bias into the AI, who it is trained by, who it is developed by,” he said.
“If you have young, male Caucasian developers building the AI and training the AI, you will get some of those biases coming through.”
More “diversity of thought” was entering the tech sector, he said, but more work was needed on company hiring processes to ensure the humans working on AI were not likely to introduce biases that could lead to flawed decision making.
O’Donnell said areas of AI, such as using the technology to operate self-driving cars, would have to be regulated by governments.
For Anandanayagam, a more comprehensive regulatory approach is required to sit above the “self-imposed principles” that tech companies like IBM, Microsoft, Google, Apple and Amazon have introduced.
She told Tech Week viewers: “More important is putting regulation in place that gives our businesses and our citizens certainty about how this technology can be used and what we think it might be capable of doing in the future.”
The Government’s algorithm charter is being described as an evolving document, with additional government agencies expected to join the initial group of 19 signatories.