The potential impact of AI on work and employment is hotly debated. Will annual performance reviews be carried out by a chatbot? Will automating some aspects of jobs allow people to carry out more creative and flexible gig work? And how will these changes affect workers’ rights?
Experts may not have all the answers, but our research project aims to identify ‘weak signals’ of potential change that can help point to likely scenarios for critical policy focus.
Forecasting the future impacts of AI at work
In March 2023, a team of researchers from the University of Cambridge, UK (Centre for Business Research, Faculty of Law, and Computer Laboratory), and Hitotsubashi University, Japan, got together to explore ways in which artificial intelligence (AI) is changing the future of work—and assess whether our current political, economic, and legal institutions are prepared for what’s coming.
Thirty-two experts, ranging from human resources (HR) professionals and lawyers to trade unionists and academics, were divided into three break-out groups. The idea was to brainstorm future scenarios based on a dataset of summaries of around 100 media opinions circulated before the workshop. The dataset included news, blog posts, and op-eds published in English around the world over the past four years, and was curated via Google searches for recent writing on AI and work.
This exercise is called ‘horizon scanning’. It is a forecasting methodology which, along with other forecasting tools, can be used to identify and map areas for critical policy focus. In other words, based on the opinions of the experts in the room, the idea is to detect early ‘weak signals’ that could later become indicators of potential change.
The dataset was meant to provoke thought and facilitate future scenario-building. It included summaries of articles on themes ranging from the rise of worker surveillance to the potential replacement of humans by robots, from AI-based hiring systems to protest in platform work, and from art-making algorithms to racially biased ones.
Three possible problems for policymakers
We focused on three key areas of potential impact: HR performance assessments; protections for freelance workers in the gig economy; and the resolution of workplace disputes.
1 Will AI replace the humane in HR assessments?
The first future scenario is that an increase in AI-mediated interventions in HR performance assessments might lead to ‘humans losing voice’. AI-driven assessments might be more objective, arguably fair, and potentially even cost-effective. However, too much trust in these assessments could be dangerous for a variety of reasons. First, bias would need to be removed from design as well as from implementation. Second, there will always be inherent issues with design because AI-driven assessment, by using quantitative metrics to measure success, will miss out on crucial qualitative information. Third, even in collaborative human-AI settings, human assessors could eventually develop tendencies to defer to the seemingly more objective, metric-based algorithmic rankings, thereby rendering futile the objective of having ‘a human in the loop’.
Besides the need for targeted problem-identification, one potential solution is to address the specification problem: in other words, ensuring that AI assessments capture what really matters. Making AI assessments more inclusive will also require a careful consideration of neurodivergence, disability, and cultural differences across the workforce. A more efficient implementation of AI would arguably require repurposing the human element in decision-making to reprioritise qualitative values.
2 Will AI lead to a further boom in gig work?
The second group discussed the possibility of ‘automation freeing up workers’ time to take on freelance jobs’. This group predicted a potential boom in creative industries such as gaming, music, and graphic design. The flexibility to explore creative ideas could allow for income generation outside traditional working structures. While this might make it easier to navigate the accessibility challenges of traditional workplaces, an increase in freelance work could also mean more exploitation. Employers’ knowledge of workers’ engagement in freelance work might give them more market power, enabling them, for example, to depress wages even where AI has a negligible impact on outputs.
A potential solution would be to ensure stronger protection of rights, both within and beyond the traditional employer-employee relationship. There will also be a need to place strict limits on worker surveillance (it is hard to envisage circumstances in which secret monitoring is acceptable, other than in clear cases of suspected fraud) and to shift focus from inputs to outputs in assessments.
3 Will AI freeze worker voice in disputes?
The third group brainstormed the possibility of ‘increased automatic adjudication of workplace disputes, with an increase in the use of predictive AI’. It was forecast that AI could improve the adjudication process by cutting costs, streamlining disputes, and improving access to justice. This could greatly improve workplace relations in the long term, but the process is not without significant challenges.
Automated adjudication could also exclude worker voice, through either a lack of representation or a lack of information. Unilateral control of the technology deployed to streamline disputes, for example, could lead to information asymmetries, further deepening the power gap between management and workers. This risks a loss of democratic control over the dispute-resolution process in a context where the system already favours employers, leaving workers vulnerable to an ‘algorithmic black box’.
Steering the future of AI-driven work in the right direction
Our next step is to evaluate the predictive value of the specific scenarios sketched out by the workshop participants. The progress so far has raised critical questions for stakeholders to consider over the next 10 years. For example, how is the AI deployed for HR assessments designed, and how can we make the participation of the ‘human in the loop’ more meaningful? What regulatory frameworks should we be considering for a future that defines ‘work’ by workspaces (i.e. by relationships) rather than by the physical walls of workplaces? And finally, how can we ensure democratic, transparent, and mutually respectful dispute-resolution processes that go beyond paying lip service to the protection of workers’ rights?
In the longer term, it is important to consider how AI will shift power in the workplace, and whether these interventions help dismantle or reinforce existing power structures of oppression and marginalisation. Our ongoing UKRI-funded study will provide concrete considerations for tackling some of these short-term and longer-term questions, in the hope of steering the future of AI-driven work in the right direction.