When hype is harmful: why what we think is possible matters

Alix
Published in The Startup
8 min read · Sep 18, 2019

To decide what technology we want in our systems and structures, we need clear-eyed conversations about what is possible. Here, I look at how inconsistent — and unrealistic — our ideas of ‘what is possible’ can be, who benefits from overblown assumptions of ‘groundbreaking’ tech, the challenges this brings and what we can do about it.

Hype and how it shapes debate

Sometime last year, a colleague working at a foundation funding LGBTQI+ issues was panicked. A news report said that machine vision AI could determine a person’s sexuality. What would this mean for the marginalised groups she supported around the world? What could be done?

The news report was clearly guilty of something common. It was likely a) simplifying a complex technical capacity for a sound bite, b) uncritically regurgitating an overstated press release from a company or researcher, and/or c) prioritising breaking news about a tech ‘innovation’ without meaningfully assessing how it could affect society.

Over the past few years, I’ve spoken with data scientists and engineers about the trends in their field and how they relate to ethical futures. Most were dismissive and sceptical about the claims being made and the concerns being raised. At the time, I asked myself how the architects of our future could be so unwilling to engage with the social dimensions of their craft. While I still find it frustrating when data scientists refuse to engage on these topics, I understand why they push back. They know that their field is being hyped in a way that makes it difficult to have reasonable discussions about it.

We get excited by a technical breakthrough because it means that society may have a new technical capacity. A new possibility that can help us build a better — or at least different — world. But for society to effectively incorporate new technical capacities, we must evaluate them. And yet, it can be very difficult to accurately weigh up what a new technical capacity means, what level of confidence and control we have over it, and what it can do for (and to) different parts of society.

Hype can lead to:

  • ‘Breakthroughs’ being oversimplified
  • A failure to consider context and people when evaluating appropriateness or effectiveness of a technology
  • Uncertainty about what technology can and can’t do
  • Short-circuiting societal debate about the implications of new technology

And yet, the habit of hype is everywhere.

How can society have meaningful conversations about how we want to embed technology in our systems and structures if we can’t have a clear-eyed conversation about what is possible? In other words, how do we ask the ‘should we’ questions if we can’t even answer the ‘can we’ questions?

This post is about the dynamics of how differently we perceive technical capacity, and covers:

  1. Why our perceptions of a capacity can be so divergent and divorced from actual capacity;
  2. Who benefits when we perceive a technical capacity as more groundbreaking and generally applicable than it is;
  3. What challenges this presents; and
  4. What we can do about it.

Technical capacity: why perception is often divorced from reality

Technical advances are raising hugely important social questions like: what power should the state have as technical capacity grows? How will industry grow over the next few decades, and to the benefit of whom? Who should drive the decisions about how to design, scale, and regulate new emerging technologies and capacities?

These are all critical debates — and they are very disorienting, for several reasons:

  • The pace of technical change is fast and growing
  • Technical fields are becoming more specialised by the day
  • Society is looking for solutions to huge problems, and technology seems a promising way to leapfrog entrenched issues
  • Capacity manifests differently in different conditions, contexts, and cultures
  • Capital is practically free to borrow, so the bets on new technologies are growing in size and risk (as is the incentive to inflate new developments)
  • There is considerable secrecy around intellectual property — or secrecy used to hide a lack of capacity (it’s unclear which)

Different sectors and groups end up relying on their perceptions, expertise, incentives, and biases to determine what is and isn’t possible. The same ‘breakthrough’ may be perceived as ‘snake oil’, or ‘a panacea’, or anywhere in between.

This divergence between how individuals, institutions, communities, sectors, and industries perceive a particular technical capacity has a huge effect on what we think it should be used for in the wild. And when there is a big difference between the broad perception of technical capacity and actual technical capacity, society will struggle with the already complex navigation of what Shannon Vallor calls ‘techno-social opacity’.

Who benefits when perception of capacity is higher than actual capacity?

A piece in GIZMODO by Bryan Menegus, published in August 2019

Certain parts of society benefit disproportionately when technical capacity is broadly perceived as high, regardless of whether that perception is accurate.

Investors in early stages of AI companies benefit financially when other investors see that company as having made critical breakthroughs that will transform industry. Police forces benefit when populations think that the next, great, technical tool is going to increase the competence of those who want to keep us safe. Municipal policymakers benefit when technical developments can make them appear to be governing in innovative ways. Social media companies benefit when users believe that AI will be able to handle toxic information environments and content moderation at scale. Technology companies benefit — in terms of their reputation and how they are regulated — when their inventions are seen as gifts from the gods.

It is no accident that hype-driven assessments of new technical capacities abound. Most people with power are incentivised to gloss over questions about what emerging technology can really do. And some of those that benefit from hype will actively encourage society to overestimate what a technology can do, so they can short-circuit conversations about what technology should do. (See for example: in emergency response, emotion-detecting AI, criminal justice risk assessments, and the wacky world of techno-utopian entrepreneurship.)

What challenges does this present?

Why does it matter when the perception of a technology’s capacity diverges from its actual capacity?

  • When we are confused as to whether Cambridge Analytica is overinflating its capacities, we add a dimension to the story that complicates how we respond — because we don’t know how much of it is a real threat and how much of it is bluster.
  • When we don’t know whether autonomous vehicles can drive properly, regulators don’t know whether to issue licences for them to operate. We are encouraged to focus on the future in which they do, not the present in which they can’t.
  • When we are unsure if AI companies can predict health outcomes, we question whether health services like the NHS should hand over more data, more liberally, to improve those health outcomes. And we question how much we should invest in technical advances versus more obvious needs.

All this uncertainty has an impact on trust. Who can we trust to communicate a technical breakthrough? Can we trust the company that profits off of us believing it? Can we trust those who may use the breakthrough to convince us that they are technically savvy? As technology advances, we will increasingly need reliable sources of information and confirmation about technical possibility. And yet, the environment that we are in suggests those who are positioned to provide us with that information will have more and more incentive to lie to us.

What do we do about it?

All is not lost. We can all be thinking more like scientists (including social scientists). When we hear about a breakthrough, we should ask questions about methods, incentives and findings. When it looks too good to be true, we should poke and prod and look at how inventors explain what is possible. If a claim is big, the evidence put forth should also be big. When a breakthrough is heralded as changing society, we should ask how, for whom, and at whose cost. And we should expect inventors to communicate clearly about what their tech cannot do, and in what circumstances it should not be used.

As procurers of systems of technology (as states, companies, and communities), we should expect to understand how a system works. We should understand things like false-positive rates, expect that the time between a breakthrough and the reasonable incorporation of a new technology into socio-critical infrastructure will be long, and expect to know how things work — well enough to take responsibility for when they don’t. The relationships between governments and individuals are about more than simply ever-more efficient service delivery. They should be about care, protection, responsibility and accountability.

Trust is like a mountain. It takes thousands of steps to gain, and a moment to fall and lose everything. And because technical capacity requires deep (and increasingly specialised) expertise, the trust that inventors must ask of society will only increase over time. Being intentionally clear about capacities matters — even if it doesn’t seem to matter early on, when investor capital is easy to come by and the halo effect is still flickering.

Being trustworthy isn’t just about doing what you say, it’s about being truthful about what you can reasonably do. Markets, societies, communities, and the future depend on it.

A piece in The Verge by James Vincent, published in July 2019

Questions we should frequently ask ourselves

I have been working on a set of questions that we should ask ourselves frequently when we hear about technical ‘breakthroughs’, from ‘Who benefits from the world believing a technical breakthrough is real?’, to ‘What laws exist to protect those potentially affected by the technology if it is launched?’

I’ve set these 12 questions out in a separate post, and would love to hear people’s thoughts on them in the comments.

Thanks to Anna Scott for her help editing this piece. You can find out about her freelance services at anna-scott.com.


Intentional technology @ Computer Says Maybe, co-founder @engnroom