Are AI systems team players?

Marko Balabanovic
Digital Catapult
Aug 5, 2019 · 10 min read


Robot teams face off in the 2015 RoboCup finals, held in Hefei, China, July 22, 2015. (Jianan Yu / Reuters, via Al Jazeera)

This post was also published by Business User magazine in Germany as “Sind KI-Systeme Team-Player?”

We’re overrun with predictions about AI systems taking people’s jobs, expressed in numbers, percentages and trillions of dollars. But every job I’ve ever had has been in an organisation: in a team, with colleagues and structures, social dynamics and office politics. The question we should be asking is: what role will AI systems take in organisations? Will they be managers? Or co-workers? Or just better tools we can use?

AI as a manager

If you think you’ve never met someone whose manager is an AI system, think again. Every Uber driver’s work is allocated by an algorithm; they have just 15 seconds to accept or reject a ride request, without knowing the destination or fare. Their performance is reviewed automatically, their pay is determined by the system, and if their ratings drop too low that same system will deny them work. Morale-boosting motivation comes from an app, and their recourse to help is a customer service agent in a faraway country rather than a human manager. AI recruitment systems are screening candidates. In fact, in organisations around the world, AI systems are already performing every traditional management function¹. But today’s AI managers are creating dehumanised systems in which workers are treated without respect or dignity. 90% of Amazon’s Madrid logistics staff walked off the job during Black Friday in 2018, there were protests at five sites across the UK, and workers in Staten Island are trying to become the first Amazon staff in the US to unionise. AI-managed staff protest their working conditions with banners that read “We are not robots”. Ironically, it is their managers who are the robots.

Constant surveillance, gamified incentives and micro-management of tasks would not be considered positive traits in a modern manager, but this is how today’s automated AI systems behave in managerial roles. We clearly have some work to do² before we can make a success of AI management.

Humans protesting AI management (GMBunion@Amazon/Twitter)

AI as a tool

The simplest way to deploy an AI system in an organisation is as a tool, augmenting human work. The Gmail feature that completes your sentences as you type is a clever AI tool. It helps as you work and can take the initiative to make suggestions, but it does not act autonomously: it won’t automatically reply to all your unread messages.

Mixed-initiative AI tool for composition: Microsoft “Clippy”, 1997
Mixed-initiative AI tool for composition: Gmail, 2018 (from the Google I/O 2018 conference, via The Verge)
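
To make the distinction concrete, here is a minimal sketch of this “suggest, never act” pattern in Python (the names and the toy trigger are hypothetical, not Gmail’s actual API): the tool may volunteer a completion, but the draft only changes when the user explicitly accepts.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    text: str
    confidence: float

def suggest_completion(draft: str) -> Optional[Suggestion]:
    """Volunteer a completion only when confident. A real system would call
    a language model here; this stub just illustrates the interaction pattern."""
    if draft.endswith("Best "):
        return Suggestion(text="regards,", confidence=0.9)
    return None

def compose(draft: str, user_accepts) -> str:
    """The tool takes the initiative to suggest, but never acts on its own:
    the draft changes only if the user explicitly accepts."""
    suggestion = suggest_completion(draft)
    if suggestion and user_accepts(suggestion):
        return draft + suggestion.text
    return draft

# The user presses Tab (accepts) or keeps typing (rejects); either way,
# nothing is ever sent without them.
print(compose("Best ", user_accepts=lambda s: True))  # "Best regards,"
```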

Nevertheless, better AI tools in organisations will have a massive impact. They’ll help people be more efficient, or even create new jobs. Or will they?

“I think that if you work as a radiologist you are like Wile E. Coyote in the cartoon,” Hinton told me. “You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath.” Deep-learning systems for breast and heart imaging have already been developed commercially. “It’s just completely obvious that in five years deep learning is going to do better than radiologists,” he went on. “It might be ten years. I said this at a hospital. It did not go down too well.”

From A.I. versus M.D., Siddhartha Mukherjee interviewing top AI researcher Geoff Hinton, New Yorker, April 2017.

You’ll commonly hear predictions that AI tools will replace specific professions such as radiology. The opposing view is that the development of a better tool only makes radiologists better at their jobs, just as better and more accurate scanning technologies have. Indeed the field of radiology itself has grown up, since the first medical X-ray images in 1896, alongside the development of new kinds of imaging such as MRI, ultrasound, PET and CT. Doctors have constantly designed and made use of better tools, and they will continue to do so. AI tools will soon be just another part of their arsenal.

Another example comes from the field of design. “Generative” machine learning techniques produce multiple novel designs, given examples to learn from and a set of constraints to satisfy. Early-stage demonstrations are helping designers with everything from shoes to fonts, and manufacturing companies are already creating improved products like the seatbelt bracket below. Just as photographers and illustrators have adopted tools like Photoshop, we’d expect designers of all kinds to use generative AI to speed up their workflows or improve their designs.

New seat belt bracket designed by human designers working with improved AI tools, replacing an eight-part assembly with a single part that is 40 percent lighter and 20 percent stronger (GM working with Autodesk’s AI generative design tools)
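
For a flavour of what “generative” means here, the following is a minimal generate-and-filter sketch (illustrative parameters and scoring only; commercial tools like Autodesk’s use physics simulation and far richer learned models): propose many candidates, discard those that violate the constraints, and keep the best scorer.

```python
import random

def propose_bracket() -> dict:
    """Sample a candidate design. A real generative system would sample from
    a model trained on example designs; here we just sample two parameters."""
    return {
        "thickness_mm": random.uniform(1.0, 5.0),
        "rib_count": random.randint(0, 6),
    }

def satisfies_constraints(design: dict) -> bool:
    """Hard constraints, e.g. minimum strength. This toy formula stands in
    for a physics simulation or an engineering rule check."""
    strength = design["thickness_mm"] * (1 + 0.3 * design["rib_count"])
    return strength >= 3.0

def weight(design: dict) -> float:
    """Objective to minimise: lighter is better."""
    return design["thickness_mm"] + 0.4 * design["rib_count"]

# Generate many candidates, filter by constraints, rank by the objective.
candidates = (propose_bracket() for _ in range(10_000))
feasible = [d for d in candidates if satisfies_constraints(d)]
print(min(feasible, key=weight))
```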

I believe that tools and jobs will evolve together, as they always have. With cleverer AI tools we’ll get new kinds of designers and artists, lawyers and accountants, writers and editors, engineers and architects³.

AI as a co-worker

So if we’re worried about AI managers, and excited about AI tools, what about AI systems as our co-workers? For an AI system to be worthy of the title “co-worker”, it must exhibit autonomy and take ownership. We must be able to give it some control.

An AI room thermostat has control and autonomy, but its job is surely too minor for it to qualify as a co-worker. A robot vacuum cleaner? Still not there. A self-driving tractor working alongside a farmer? An automated trading system? Now we’re talking. But the idea of an autonomous automated helper isn’t new. We’ve had aeroplane autopilots for a century, and they can teach us valuable lessons about how to cope with AI co-workers in our organisations. The problem with autopilots is that they’re too good. Pilots rely on them for most of a flight, so when an autopilot fails and human pilots have to take control suddenly, accidents happen. In the Asiana Airlines crash of 2013, where the plane under automated control was coming in to land too slowly, none of the four pilots on board noticed. In the recent Boeing 737 Max crashes, pilots couldn’t take over control fast enough. A co-worker who gives up suddenly in an emergency is not a safe team member.

The designers of autonomous cars have thought about this and defined five levels of autonomy. Right now we’re mostly at level 2, with features like lane following or automated parking. At level 4 we’ll have full autonomy, but with limits: a well-mapped area, limited traffic, fine weather. Level 3, in between, says the driver must be ready to take back control of the vehicle at any time. The question is: can level 3 ever be safe, if a car needs to be able to hand control back to the driver at short notice? After all, humans are slow, inattentive and easily distracted, and hardly make the most reliable backup system in an emergency, as we know from the lessons of autopilots. Most car makers are hoping to skip level 3.

Tesla car handing control back to driver (at 72mph) via Teslarati
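
The level 3 difficulty is, at heart, a timing problem. A sketch with invented numbers: the system issues a takeover request with a deadline, and if the human cannot respond in time, the only safe option is an automated fallback, which itself demands level-4-grade capability.

```python
from dataclasses import dataclass

@dataclass
class TakeoverRequest:
    reason: str
    deadline_s: float  # how long the system can keep the vehicle safe on its own

def resolve_takeover(request: TakeoverRequest, driver_reaction_s: float) -> str:
    """Level 3 assumes the driver responds before the deadline expires.
    Distracted drivers can take many seconds to re-engage, so the fallback
    branch below is exactly the hard part car makers would rather skip."""
    if driver_reaction_s <= request.deadline_s:
        return "driver takes control"
    # No timely human response: the system itself must reach a safe state
    # (e.g. stopping in lane), which requires level-4-grade capability.
    return "automated minimal-risk manoeuvre"

print(resolve_takeover(TakeoverRequest("sensor degraded", deadline_s=3.0),
                       driver_reaction_s=7.0))
```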

The issue of handover of control isn’t just for cars and planes; it will arise in every domain. Mental health chatbots will fall back on a human therapist in difficult situations, handing over conversational control. AI medical diagnosis systems performing triage act as autonomous co-workers, transferring control to different healthcare workers and care pathways.
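
A minimal sketch of this escalation pattern (hypothetical thresholds and names throughout): the system acts autonomously only while the situation is low-risk and its confidence is high, and otherwise transfers the conversation, with its context, to a human.

```python
def triage_reply(message: str, risk: float, confidence: float):
    """Decide whether the bot answers or hands over to a human.
    In a real system, `risk` and `confidence` would come from classifiers."""
    if risk > 0.8:
        # Difficult or dangerous situation: hand over immediately,
        # passing the conversation so the human has full context.
        return ("human_therapist", f"escalated with transcript: {message!r}")
    if confidence < 0.6:
        # Not dangerous, but the bot is unsure: queue for a person.
        return ("human_review", f"queued for human review: {message!r}")
    return ("bot", "automated response")

print(triage_reply("I've been sleeping badly", risk=0.2, confidence=0.9))
print(triage_reply("I'm in crisis and need help now", risk=0.95, confidence=0.9))
```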

Some jurisdictions are insisting on transparency over where this control lies: California now requires AI systems to identify themselves as “not a natural person”. Google’s very impressive Duplex AI assistant, which can make phone calls to book appointments on behalf of users, will from now on identify itself as a computer system, after accusations of deceitful behaviour. We’ll need clarity on ownership and liability: AI co-workers will be responsible for work, but will they ever be accountable?

“Do you realize,” Ng told Darcy, “that Woebot spoke to more people today than a human therapist could in a lifetime?”

From “May A.I. Help You?”, Andrew Ng (well-known AI researcher) talking to Alison Darcy (clinical psychologist and creator of Woebot), New York Times, November 2018.

We shouldn’t assume that an AI co-worker will be a single entity like a human team-mate. Not only can there be as many of them as needed, working 24/7 in parallel, but they can all learn as one. As Elon Musk said of the Tesla fleet: “When one car learns something, they all learn it.” Woebot is a therapy chatbot designed to help people suffering from depression or anxiety. In its first week, it talked to 500,000 people, with the opportunity to learn from more interactions than a human therapist ever could. There will be powerful economic incentives that drive deployment of such systems into organisations. In order to make those deployments successful, we must deal with the issues of control and autonomy.
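
One way to picture this architecturally (an illustrative sketch, not how Tesla or Woebot are actually built): many agent instances share a single model, so whatever is learned from any one interaction is immediately available to every instance.

```python
class SharedModel:
    """One model served to every agent instance."""
    def __init__(self):
        self.knowledge = {}

    def learn(self, situation: str, response: str):
        self.knowledge[situation] = response

    def respond(self, situation: str) -> str:
        return self.knowledge.get(situation, "no policy yet")

class Agent:
    """Thousands of these can run in parallel, all backed by the same model."""
    def __init__(self, model: SharedModel):
        self.model = model

shared = SharedModel()
fleet = [Agent(shared) for _ in range(1000)]

# One instance learns from one interaction...
fleet[0].model.learn("unmarked roadworks", "slow down and replan")
# ...and every other instance immediately knows it too.
print(fleet[999].model.respond("unmarked roadworks"))
```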

AI as a team player

Today’s debates about AI and jobs are too simplistic. There won’t be many entire professions that disappear. The automation of our work within organisations will be complex, messy and full of unintended consequences. We’re seeing those already.

The risks in employing an autonomous AI co-worker! (from Walt Disney’s Fantasia)

I’ve tried to frame the problems in a different way, by considering how we will interact with AI systems in organisations. AI tools open up new markets and new professions. AI systems that manage human work are rife with ethical concerns and pose the greatest risks. And AI systems running autonomously as co-workers are the most intriguing area, if we can create effective new patterns of interaction for handing over control and trust between people and machines.

Ultimately these AI systems will succeed in organisations if they can be good team players, collaborating with their human co-workers. Initially our AI systems will be opinionated co-workers who think they’re always right, who take things very literally, who occasionally go rogue like the magic broomsticks with a life of their own in Disney’s Fantasia. Constructs like the five levels of autonomy will inform design discussions, and over time good interaction patterns will emerge. For AI systems to become team players, we will need a new discipline of organisational AI design.

Footnotes

¹ What does a manager actually do? The usual definition, from Harold Koontz and Cyril O’Donnell back in 1955, includes planning (setting objectives), organising (developing a structure, allocating resources), staffing (selection, appraisal, development), directing (influencing, guiding, supervising) and controlling (monitoring progress). There are AI systems on the market for each of these activities. Anaplan uses machine learning to suggest new business plans for customers like Del Monte, allowing them to re-plan quickly in the face of big changes like El Niño. Kronos software, used by large companies like Starbucks, automatically allocates shifts to workers to minimise cost, but at the expense of workers being able to predict their hours and arrange childcare. A market exists for AI systems that select candidates, with companies such as Pymetrics, Entelo and HiredScore. Unilever uses video interviewing with AI facial and behaviour analysis to screen candidates. New Google spinout Humu analyses thousands of employee data inputs to generate “nudge” messages that prompt more effective behaviours. Percolata, a provider of algorithmic management software for retail, ranks employees according to shopper yield and profiles each employee’s performance. Deliveroo’s algorithm automatically compares couriers’ delivery times to predicted delivery times in order to assess them. AI companies are targeting every aspect of managerial behaviour.

² Frederick Winslow Taylor developed the concept of “scientific management” in the late 1800s. By timing and recording all of the activities in a factory, his method attempted to maximise productivity by “the establishment of many rules, laws and formulae which replace the judgment of the individual workman”. The ideas of “Taylorism” are still with us, but we have come a long way since then: a modern factory expects workers to suggest and implement improvements themselves rather than blindly follow instructions. Algorithmic management is a more recent evolution, and indeed has been termed “digital Taylorism”. The social and legal constructs around it will take time to coalesce. The UNI Global Union (representing 20 million workers globally) has published its principles for workers’ data privacy and protection, opening a stronger debate about workplace surveillance and how the data should be used. The Fairwork Foundation is a project to “certify online labour platforms, using leverage from workers, consumers, and platforms to improve the welfare and job quality of digital workers”. Some claim that future automated systems could remove bias, reduce discrimination and optimise for fairer and more humane work schedules.

³ Just as radiology has grown up as a discipline alongside the development of new tools such as MRI, so new creative industries have emerged alongside the tools that enable them. AI is just the latest capability to take advantage of. Musicians have embraced developments like the electric guitar (1932), tape recording (1935) and the origins of the synthesiser in the Hammond organ (1935). The music industry today relies on technology and tools from creation through production to distribution. Genres such as electronic dance music are £5B industries. Spotify has around £5B in revenue, and relies on AI tools to recommend personalised weekly playlists to 190M users. The animation, special effects and games market is estimated at over £200B, largely relying on tools that have evolved steadily since the 1970s and are now incorporating AI techniques. We are in the very early stages of augmented reality adoption, where we can see the same pattern: a new set of AI-enabled technological capabilities and authoring tools is enabling the birth of a new creative industry.

Coda

Anyone who has worked in AI for a while will, at some point early in reading this, say: “Hang on a minute! You’re not just talking about AI systems, you’re talking about any sort of computer automation!” Quite correct. The implications of these systems for our lives don’t really depend on whether they’re technically running algorithms a computer scientist would class as AI or machine learning, even if such distinctions were universally agreed. But it is undeniable that the power of AI has thrown these problems into sharp relief over the last few years. I believe this is really a debate about automation, not about AI, and I am unashamedly using the “AI” label to attract attention. I would love to hear thoughts on whether I have digressed!

Thanks

Thanks to Kevin Marks, Peter Bloomfield and others at Digital Catapult for suggestions. Earlier versions of this article were presented as part of talks at Digital Catapult and the King’s Fund in early 2019.
