"I鈥檓 sorry, Dave. I鈥檓 afraid I can鈥檛 do that.鈥 HAL鈥檚 cold, if polite, refusal to open the pod bay doors in聽聽has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space.
In the movies, when a machine decides to be the boss, or humans let it, things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality.
Algorithms (sets of instructions to solve a problem or complete a task) now drive everything from browser search results to countless other everyday decisions.
They are trading on financial markets, making and losing fortunes in microseconds. They are calculating the most efficient routes for deliveries.
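As a rough illustration of that definition, here is a minimal sketch in Python (with invented route names and distances) of an algorithm in this plain sense: a fixed set of instructions that turns input data into a decision, in this case choosing the shortest of several candidate delivery routes.

```python
# A minimal illustration of an "algorithm": a fixed set of instructions
# that turns input data into a decision. The routes and distances are invented.

routes = {
    "via the highway": 18.4,      # distance in kilometres
    "via the city centre": 12.9,
    "via the back streets": 15.2,
}

def shortest_route(options):
    """Return the name of the route with the smallest distance."""
    return min(options, key=options.get)

print(shortest_route(routes))  # prints: via the city centre
```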
In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance and even terminating employment: "I'm sorry, Dave. I'm afraid you are being made redundant."
Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called "algorithmic management". It carries a host of risks in depersonalising management systems and entrenching pre-existing biases.
At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It's a black box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.
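To illustrate the point, here is a toy sketch (not any platform's actual code, with invented inputs and weightings) of what that black box looks like from the worker's side: the inputs and the yes/no outcome are visible, but the scoring logic in between is not.

```python
# A toy illustration of the "black box" problem, not any real platform's code.
# The worker can see what goes in and what comes out, but the weighting
# in the middle is hidden from them.

def _hidden_score(acceptance_rate, average_rating, minutes_waiting):
    # In a real system this logic would be proprietary and invisible to workers.
    return (0.5 * acceptance_rate
            + 0.3 * average_rating / 5
            + 0.2 * min(minutes_waiting, 60) / 60)

def will_be_offered_next_job(acceptance_rate, average_rating, minutes_waiting):
    """All the worker observes: inputs in, a yes/no decision out."""
    return _hidden_score(acceptance_rate, average_rating, minutes_waiting) > 0.6

print(will_be_offered_next_job(acceptance_rate=0.9, average_rating=4.8, minutes_waiting=20))
```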
Here are a few examples of algorithms already at work.
At Amazon's fulfilment centre in south-east Melbourne, they set the pace for "pickers", who have timers on their scanners showing how long they have to find and scan the next item. As soon as they scan that item, the timer resets for the next. All at a "not quite walking, not quite running" speed.
Or how about AI determining your success in a job interview? More than 700 companies have used this kind of technology. US developer HireVue says its software speeds up the hiring process by 90% by having applicants answer identical questions and then scoring them according to language, tone and facial expressions.
Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be biased. The classic example is the COMPAS software used by US judges, probation and parole officers to rate a person's risk of reoffending. In 2016 a ProPublica investigation showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45% of the time, compared with 23% for white subjects.
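For readers curious how a figure like that is arrived at, here is a minimal sketch, with entirely made-up records, of how a false-positive-rate comparison between two groups is calculated. ProPublica's actual analysis used real court data; the code below only shows the shape of the calculation.

```python
# A sketch of how a false-positive-rate comparison is computed, using made-up
# records: (group, labelled_high_risk, actually_reoffended).

records = [
    ("A", True,  False),
    ("A", True,  True),
    ("A", False, False),
    ("A", True,  False),
    ("B", False, False),
    ("B", True,  True),
    ("B", False, False),
    ("B", True,  False),
]

def false_positive_rate(rows, group):
    """Share of people in `group` who did NOT reoffend but were rated high risk."""
    did_not_reoffend = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))
```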
Algorithms do what their code tells them to do. The problem is this code is rarely available. This makes them difficult to scrutinise, or even understand.
Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo and other platforms could not exist without algorithms allocating work.
Over the past year Uber Eats' bicycle couriers, for instance, have blamed unexplained changes to the algorithm for slashing their jobs and incomes.
Riders can't be 100% sure it was all down to the algorithm. But that's part of the problem. The fact that those who depend on the algorithm don't know one way or the other has a powerful influence on them.
This is a key result from our own research. Most of the workers in our study knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn't know was how that data was used to award them work.
In response, they developed a range of strategies (or guesses) to "win" more jobs, such as accepting gigs as quickly as possible and waiting in "magic" locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work.
The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases of the type hidden within the COMPAS algorithm for years. Second, it compounds the power imbalance between management and worker.
Our data also confirmed others' findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there is no one to complain to anyway. When Uber Eats bicycle couriers asked for reasons for their plummeting income, for example, responses from the company advised them "we have no control over how many deliveries you receive".
When algorithmic management operates as a "black box", one of the consequences is that it can become an invisible form of control. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilise a reliable and scalable workforce while avoiding the obligations of a traditional employer.
"The absence of concrete evidence about how the algorithms operate", the Victorian government's inquiry into the on-demand workforce notes in its report, "makes it hard for a driver or rider to complain if they feel disadvantaged by one."
The report, published in June, also found it is "hard to confirm if concern over algorithm transparency is real".
But it is precisely the fact that it is hard to confirm that is the problem. How can we even start to identify, let alone resolve, the issues algorithmic management raises?
Fair conduct standards to ensure transparency and accountability are a start. One example is the Fairwork project, led by the Oxford Internet Institute. The initiative is bringing together researchers with platforms, workers, unions and regulators to develop global principles for work in the platform economy. This includes "fair management", which focuses on how transparent the results and outcomes of algorithms are for workers.
Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles, we risk inviting HAL into our workplaces.
This article was originally published by The Conversation.
It was co-authored by Dr Alex Veen from the University of Sydney Business School's Discipline of Work & Organisational Studies, together with colleagues from Edith Cowan University and the University of Western Australia.