The Risks Of AI-Based Tools In Job Centers

As AI has evolved in recent years, it has grown increasingly capable of making predictions and judgments. Might job centers begin to deploy such algorithmic aids? New research from the University of Copenhagen outlines some of the problems with such an approach, not least of which is the way these systems often reduce people to crude caricatures in order to make their decisions.

“You have to understand that people are human. We get older, become ill and experience tragedies and triumphs. So instead of trying to predict risks for individuals, we ought to look at implementing improved and more transparent courses in the job center arena,” the researchers explain.

Alternative approaches

The researchers put their heads together to construct a number of alternative approaches to understanding the job-readiness of unemployed people, approaches that don't require the use of potentially ethically compromised AI-based systems.

“We studied how to develop algorithms in an ethical and responsible manner, where the goals determined for the algorithm make sense to job consultants as well,” the researchers explain. “Here, it is crucial to find a balance, where the unemployed individual’s current situation is assessed by a job consultant, while at the same time, one learns from similar trajectories using an algorithm.”
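The quote above describes pairing a consultant's own assessment with an algorithm that learns from similar past trajectories rather than one that issues verdicts on its own. As a purely illustrative sketch of that idea, and not the researchers' or any agency's actual system, the snippet below shows one way a "similar trajectories" lookup might work: a handful of hypothetical features describe a person's situation, and the algorithm surfaces comparable past cases for the consultant to weigh, instead of producing a risk score that decides for them.

```python
# Illustrative sketch only -- not the researchers' or the Danish agency's system.
# All feature names and data are hypothetical. The idea: retrieve similar past
# trajectories for a job consultant to review, rather than output a verdict.
from dataclasses import dataclass
import math


@dataclass
class Trajectory:
    case_id: str
    months_unemployed: float   # hypothetical feature
    prior_jobs: float          # hypothetical feature
    education_years: float     # hypothetical feature
    outcome_note: str          # what happened in this past case


def similarity(a: Trajectory, b: Trajectory) -> float:
    """Negative Euclidean distance over the (hypothetical) numeric features."""
    return -math.dist(
        (a.months_unemployed, a.prior_jobs, a.education_years),
        (b.months_unemployed, b.prior_jobs, b.education_years),
    )


def similar_cases(current: Trajectory, past_cases: list, k: int = 3) -> list:
    """Return the k most similar past trajectories, for the consultant to weigh."""
    return sorted(past_cases, key=lambda p: similarity(current, p), reverse=True)[:k]


# Example usage with made-up data:
history = [
    Trajectory("A", 14, 2, 12, "re-entered work after retraining"),
    Trajectory("B", 3, 5, 16, "found a job within two months"),
    Trajectory("C", 20, 1, 10, "needed a clarified, longer time frame"),
]
current = Trajectory("new", 16, 2, 11, "")
for case in similar_cases(current, history, k=2):
    print(case.case_id, case.outcome_note)
```

The design choice this sketch tries to echo is the one the researchers argue for: the output is a set of comparable cases a human can interrogate, not an automated judgment about the person in front of them.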

They highlight how the Danish Agency for Labour Market and Recruitment has recently rolled out an algorithm to predict the risk of long-term unemployment among citizens, a rollout that has attracted significant criticism from data law experts.

“Algorithms used in the public sphere must not harm citizens, obviously. By challenging the scenario and the very assumption that the goal of an unemployed person at a job centre is always to land a job, we are better equipped to understand ethical challenges. Unemployment can have many causes. Thus, the study shows that a quick clarification of time frames for the most vulnerable citizens may be a better goal. By doing so, we can avoid the deployment of algorithms that do great harm,” the researchers explain.

Affecting judgments

The researchers surveyed a number of practicing job consultants, who universally expressed concern that algorithms would negatively affect their judgment, particularly when it came to their most vulnerable clients.

The researchers argue that more work should be done to ensure that job consultants have a greater say in how such algorithms are developed.

“Accomplishing this is difficult and will take time, but is crucial for the outcome. At the same time, it should be kept in mind that algorithms which help make decisions can greatly alter the work of job consultants. Thus, an ethical approach involves considering their advice,” they explain.

Striking a balance

While the researchers accept that algorithms can perform useful duties in job center environments, they urge developers to consider all aspects of their design and deployment, with particular attention to the ethical dimensions.

For instance, they highlight that job center conversations are often highly sensitive, with individuals sharing details of illness, divorce, or other personal circumstances that contribute to their situation. Even if algorithms can process and use that information sensitively, there are concerns about whether doing so is an appropriate use of the technologies available to us.

“What will we do with this information, and can it be deployed in a sensible way to make better decisions? Job consultants are often able to assess for themselves whether a person is likely to be unemployed for an extended period of time,” the researchers conclude. “These assessments are shaped by in-person meetings, professionalism and experience — and it is here, within these meetings, that an ethical development of new systems for the public can best be spawned.”
