Who Should Watch Over AI?

As AI has become more powerful, there have been understandable concerns about how the technology can be policed so that it works for the benefit of society as a whole rather than for certain sections of it. A new paper from Lancaster University explores how AI can be effectively policed, and indeed who should do the policing.

The authors argue that the public does not need to understand how AI works in order to trust it. What people do need to know is that someone with the expertise to examine and understand how it operates has the authority to oversee it and to sanction its developers if it begins to cause harm.

“I’m certain that the public are incapable of determining the trustworthiness of individual AIs… but we don’t need them to do this. It’s not their responsibility to keep AI honest,” the researchers explain.

Trusting AI

The authors highlight that the public often mistrusts AI, a mistrust that stems both from the popular portrayal of the technology in recent years and from the knowledge that there is a general lack of meaningful oversight.

They argue that while attempts to increase transparency should not be discounted, they do little to address public concerns because the public lacks the knowledge to make sense of such information. Instead, a regulatory ecosystem is the only feasible way forward.

“The public do not routinely concern themselves with the trustworthiness of food, aviation, and pharmaceuticals because they trust there is a system which regulates these things and punishes any breach of safety protocols,” they explain.

“Rather than asking that the public gain skills to make informed decisions about which AIs are worthy of their trust, the public needs the same guarantees that any AI they might encounter is not going to cause them harm.”

Playing a part

Of course, that’s not to say that documentation and transparency don’t play a part, and they will be crucial elements of the regulatory ecosystem.

However, while documentation can provide valuable information for internal audits and external regulation, it does far less to foster trust among the general public.

“If we fail to recognise that the burden to oversee trustworthiness of AI must lie with highly skilled regulators, then there’s a good chance that the future of AI documentation is yet another terms and conditions-style consent mechanism — something no one really reads or understands,” the researchers say.
