Public And Private Sectors Differ In Their Approach To AI Governance

In an article published last year, I highlighted how trust is crucial to our interactions with AI-based technologies across a huge range of spheres, from autonomous vehicles to healthcare. It's a topic touched on in a new report from consultancy firm EY, which finds significant differences between the public and private sectors in how matters of trust in AI are viewed.

The report surveyed 280 global organizations and 71 policymakers to understand the relative importance they placed on various ethical principles across 12 different AI use cases. The results reveal a clear divide between policymakers, who typically aligned around specific priorities, and the private sector, where no such consensus emerged.

“As AI transforms business and industries, poor alignment diminishes public trust in AI and slows the adoption of critical applications,” EY says. “For efforts to be fruitful, companies and policymakers need to be aligned. Coordination between both sets of stakeholders is critical to developing pragmatic policy and governance approaches that are informed by constraints and realities on the ground.”

Differing priorities

Among policymakers, there was general agreement on the most important ethical principles for the appropriate development of AI-based applications. For facial recognition, for instance, factors such as fairness and the avoidance of bias were commonly agreed upon.

In the private sector, however, no real consensus emerged, with several options proving equally popular. What's more, these choices were generally driven by existing regulations, such as GDPR, rather than by emerging issues, such as non-discrimination and fairness.

This divergence produced understandable differences in views on how AI should be governed. While there was agreement that a multi-stakeholder approach was required, there was disagreement on just what form that involvement should take. Private sector respondents, for instance, thought industry should take the lead, but just 6% of policymakers agreed.

Driving governance forward

EY believes this disconnect presents significant challenges for AI governance, as well as clear market and regulatory risks for companies developing AI systems while governance approaches are still being determined.

The report doesn’t pin the blame on one side or the other, but instead argues that each has blind spots when it comes to implementing AI that operates in a fair and ethical manner. It urges greater collaboration between the public and private sectors to overcome these challenges.

“As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks,” EY concludes. “Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic.”
