“Striving to minimize technical and reputational risks” – Ethical OS and Silicon Valley’s guilty conscience

Considering how proudly they declare that they are designing the future, technology companies seem almost comically bad at anticipating the consequences of the things they create. While the shiny ad copy churned out to prop up these firms is filled with claims about connecting the world, increasing access to information, and providing workers with new sorts of flexibility, little attention is generally given to potential downsides. Indeed, as the adoring tech press and eager fans bask in the positive glow of these promises, those critics who broach the topic of potential risks are generally ridiculed as technophobes, derided as Luddites, or accused of wanting everyone to go live in caves. But then a tragicomic thing happens on the way to Silicon Valley’s fully automated high-tech utopia – these tools are used to undermine democracies, an entire society suddenly finds its essential infrastructure hackable, access to information leads to an explosion of disinformation, workers find their livelihoods made increasingly precarious, and the list goes on.

A recurring character in the drama of recent months is the exhausted-looking tech company CEO dragged before the cameras to glumly confess that they have messed up.

It is becoming increasingly clear that the problem with subscribing to the ideology of “move fast and break things” is that some things are better left unbroken, and that if one is moving too fast one might not be able to stop before plunging over the edge of a cliff. The tech behemoths thus find themselves pressed from several uncomfortable directions at once: an increasingly angry public, governments that seem prepared to put serious regulations in place, and even some of their own employees, who are beginning to push back against the companies that pay them.

What, then, is a tech company to do?

Ethical OS thinks that it is the solution.

And just as one should be skeptical when tech companies boast that they will invent the cure for all that ails the world, so too should one be skeptical of the tech-company-adjacent groups claiming that they can fix all that ails the tech companies.

Created with the backing of the Institute for the Future and the Omidyar Network, Ethical OS is less an organization and more a heuristic, at least insofar as the “Ethical OS Toolkit” is its most noteworthy creation thus far. Framed as “a guide to anticipating the future impact of today’s technology. Or: how not to regret the things you will build,”[1] the toolkit exists somewhere between a freshman seminar in engineering ethics, a mandatory workshop for employees, and a design checklist. Intended for “makers of tech, product managers, engineers, and others,” the toolkit touts that it has “been designed to facilitate better product development, faster deployment, and more impactful innovation. All while striving to minimize technical and reputational risks.”[2] Though the toolkit is couched in the laudatory view that “as technologists, we spend most of our time focusing on how our tech will change the world for the better,” it makes the risky move of suggesting that “technologists” might benefit from occasionally thinking in terms of the “glass half empty,” so that “in addition to fantasizing about how our tech will save the world” technologists spend a few moments wondering about how “it might, possibly, perhaps, just maybe, screw everything up.”[3]

The Ethical OS Toolkit presents “technologists” with three “tools” for these purposes: a series of Black Mirror-like near-future scenarios meant to push those using the toolkit into thinking seriously about the roles, responsibilities, and potential complicity of “technologists” in bringing borderline dystopian scenarios to pass; a set of “8 Risk Zones” to consider when developing a new technology, so as to more carefully zero in on the particular challenges a given technology may create; and, finally, a set of open-ended questions that aim to push the discussion beyond the specifics of a given technology and towards broader philosophical matters, such as whether there needs to be “a Hippocratic oath for data workers” or whether technologists need to qualify for a “license to design.” The toolkit is peppered with “signal” points that allow users to see “how things are already becoming different,”[4] as it positions itself as a tool for “stretching your imagination and warming up your foresight muscles…sort of like yoga for your product ideation.”[5]

At its core there is much about the Ethical OS Toolkit that seems, at first glance at least, rather likable. It is the type of tool one can easily imagine a professor walking engineering ethics students through during one week of classes, or the type of mandatory training that a tech company might make all of its employees attend. True, the Toolkit is something of a highly abridged version of the semester-long engineering ethics courses required at many universities – and employees often zone out during mandatory trainings (regardless of the field in which they work) – but at its core is a highly respectable argument: “technologists” need to do a better job of thinking about the risks of that which they create. In taking this step the Toolkit places itself within an old tradition in the philosophy of technology that highlights the important role of risk in thinking about ethics. Intentionally or not, the Toolkit seems to echo Hans Jonas’s exhortation that “it is the rule, stated primitively, that the prophecy of doom is to be given greater heed than the prophecy of bliss,”[6] as well as Günther Anders’s grim prognostication that “it is our capacity to fear which is too small and which does not correspond to the magnitude of today’s danger…Therefore: don’t fear fear, have the courage to be frightened, and to frighten others, too.”[7] Thinking about risk and technology from a “glass half empty” perspective is something philosophers and critics have long argued for; what is interesting is to see a Silicon Valley-adjacent group like Ethical OS embracing these views instead of mocking them.

Yet the Ethical OS Toolkit does not exist in a vacuum; rather, it has “been designed to facilitate better product development, faster deployment, and more impactful innovation”[8] for today’s “technologists,” many of whom ostensibly keep finding themselves in the types of messes the Toolkit is meant to help them avoid. And while the Toolkit is openly available online, it is clearly intended primarily for “technologists,” those who employ “technologists,” and those who are eager to profit off of “technologists.” Thus an essential difference separates the Toolkit from the views of figures like Jonas and Anders: those philosophers felt society as a whole needed to adjust its relation to technology, while the Toolkit is primarily concerned with slightly tweaking the thinking of those making the technology. And as was previously noted – and as should be easily apparent from a quick glance at the news – “technologists” have not been doing a particularly good job of showing that they know the risks of the things they create, and they have been impressively bad at assuming responsibility after they have been caught recklessly breaking things. Or, to put it another way, the creators of the Toolkit can feel confident that their tool, which strives “to minimize technical and reputational risks,”[9] will find a ready audience, because the tech sector is currently being roiled by scandals that are seriously harming the reputations of various companies.

To be clear, efforts that encourage “technologists” to critically engage with the ethical implications and risks of the things they create are to be applauded. Yet it is worth considering just how much applause tech firms deserve for waking up to the need to think about risk when they have already sown a trail of destruction in their wake. Much like the Center for Humane Technology (whose founder is glowingly quoted in the Toolkit), what makes Ethical OS fail the smell test is that it seems to be a desperate distraction on the part of the tech sector. It holds up the Toolkit and insists that tech firms can use this tool to regulate themselves; it allows “technologists” to claim that they have thought about the risks even as they go about doing what they were going to do regardless. The danger of the Ethical OS Toolkit and the Center for Humane Technology is that they are sweetly perfumed smokescreens that further obscure the actions of tech firms behind claims of “humane technology” or an “ethical operating system.” And it is an alluring odor meant to distract from the fact that a rotting ideology that keeps spawning foul new problems is still going unchallenged.

Traces of this problematic ideology can be detected throughout the Toolkit (just as they could be found all over the Center for Humane Technology), and thus – for all its laudable ambitions – the Toolkit seems to continually undermine itself. Even as it argues that “technologists” need to change, it reifies the worldview that got us all into this mess in the first place. Nowhere is this clearer than in the Toolkit’s oft-repeated bromides praising “technologists.” The Toolkit gushes about how “as technologists, we spend most of our time focusing on how our tech will change the world for the better,”[10] it reassures its audience that “most technologies are designed with the best intentions,”[11] it enthusiastically asks “ready to make the world a better place (or at least help others from making it worse)?,”[12] and it ends by asking “have you designed a product you feel confident will make the world a better place?”[13]

Such framing sets up “technologists” as a special caste of people: brilliantly and bravely marching forward to “change the world for the better.” While the Toolkit seems interested in avoiding a black-and-white version of morality when it comes to technological risks, it is staunchly committed to maintaining that division when it comes to the “technologists” themselves. The problem is that merely by claiming that “as technologists, we spend most of our time focusing on how our tech will change the world for the better,”[14] the Toolkit sneaks in an ethical argument that “technologists” are on the side of “the good.” It assumes good intentions. As a result the Toolkit gives “technologists” a sort of ethical get-out-of-jail-free card, allowing them to assert, as they stand amidst the ruins, that they meant well and had just wanted to make the world “better.” Yet if “technologists” are to truly begin thinking ethically, one thing they will have to grow out of is this childish belief that allows them to paint all of their actions as those of well-meaning would-be saviors.

Do some “technologists” genuinely want to make the world a better place? Certainly. But this is a point of such banality that it is almost meaningless. It is feel-good pabulum meant to allow “technologists” to believe that by directing eyeballs to advertisements they are somehow improving the world.

Comments regarding making the world “better” require one to consider what vision of the world is at work, and the Ethical OS Toolkit provides a hint as to the answer in the form of the “8 Risk Zones” about which it feels technologists need to be concerned. These eight zones are:

1. Truth, Disinformation, and Propaganda
2. Addiction & the Dopamine Economy
3. Economic & Asset Inequalities
4. Machine Ethics & Algorithmic Biases
5. Surveillance State
6. Data Control & Monetization
7. Implicit Trust & User Understanding
8. Hateful & Criminal Actors

Lest there be any confusion, these are serious areas of risk about which technologists genuinely should be concerned. And the fact that many of these “risk zones” map onto the precise problems for which tech companies are currently being lambasted does not diminish the fact that these are legitimate worries.

Nevertheless, by looking at these eight zones one can get an idea of the kind of “better” that the “technologists” have in mind. And what these eight zones reveal is that, much like the Center for Humane Technology, the Ethical OS Toolkit keeps its gaze focused on a very narrow definition of those impacted by technology, in a way that easily feeds into the self-serving narrative of Silicon Valley. For what the “risk zones” provide is a totalizing view in which the entire world of those worthy of ethical consideration consists of those who consume technology. Or to put it another way: one becomes worthy of ethical consideration, for this Toolkit, by virtue of being a consumer of technology. And thus people are not owed ethical consideration due to their membership in the human community, but are instead simply customers who need to be kept happy.

Here it is useful to point out some of the extremely important “risk zones” that are not found amongst the eight: where is the consideration of those mining the minerals necessary for the gadgets these “technologists” design? Where is the consideration of the exploited laborers who work in slavery-like conditions assembling these devices? Where is the consideration of the environmental impacts (from energy use to e-waste)? And there are certainly other “risk zones” of this sort that need to be considered if one means more by “ethical” than lip service. But instead the Toolkit encourages “technologists” to focus on the narrow slice of the world’s population that gets to enjoy the bulk of the benefits of new technologies while avoiding the downsides (the consumers). Again, this is not to doubt the importance of the “8 Risk Zones,” nor is it to suggest that they amount purely to “first world problems.” Rather, the danger is that the Toolkit simply serves as a way for tech companies to reassure their customers by saying “we’re thinking seriously about hateful and criminal actors” while continuing to ignore the unethical labor and environmental conditions on which the tech industry is built. If “technologists” are to be genuinely pushed into thinking ethically about risks, this must go beyond considering only the types of scandals that embarrass CEOs.

The Ethical OS Toolkit seems largely intended as a conversation starter: it is filled with open-ended questions that push “technologists” to think about the implications of the things they make. Yet the Toolkit is also constructed in a way that reinforces a boundary between “technologists” and the rest of the world. There is an interesting sleight of hand at work in the Toolkit – it admits that the actions of “technologists” have a serious impact on society, but then swiftly silos those “technologists” off. Though these decisions will impact you, you are not invited to participate in them; instead these are conversations to be had within tech companies, amongst the assembled “technologists,” regardless of how widely those conversations will impact the broader society. All of which is to say, one thing sorely lacking from the Toolkit is much interest in (small d) democracy. Certainly, the Toolkit is concerned with “risk zones” that might negatively impact democratic societies, but its major concern seems to be keeping tech firms free from any form of democratic regulation. After all, a tool like Ethical OS allows a company like Facebook (for speculative example) to argue before its critics that it knows it has screwed up in the past, but now it is taking ethics seriously and is forcing all of its staff (even executives!) to be trained using the Ethical OS Toolkit! See! The company doesn’t need to be regulated, it can take care of itself! What the Toolkit provides is a way for tech companies to attempt to regain the public’s trust at the very moment when broad segments of the public are beginning to suspect that these companies are not so trustworthy. It is a perfumed smokescreen to distract from the fact that the tech companies are still setting the world on fire – and they would much rather pump more perfume into the smoke than call upon a publicly funded fire department.

What the Ethical OS Toolkit acknowledges is that new technologies pose risks to us all; what it does not seem particularly interested in is the idea that all of us should have a say in making these decisions. And this is a shame, because there is a real need for conversations around technology that genuinely involve stakeholders, not just stockholders – and attempts at such conversations that freeze out segments of the impacted populations are many things, but they are not exactly ethical. Granted, one should remember that the Toolkit was “designed to facilitate better product development, faster deployment, and more impactful innovation”;[15] in other words, it is not actually interested in changing the status quo but in preserving the power and independence of the tech companies. It just knows that if that dominance is to be preserved, the tech companies will occasionally need to feign an interest in being ethical.

Ultimately the Ethical OS Toolkit may be little more than a tool that lets the tech industry feel good about itself while trying to trick the public into trusting those companies again. For what is largely missing from the Toolkit is a sense that there are some things “technologists” simply shouldn’t build, a sense that the people impacted deserve a voice in the decisions that will impact them, or a vision of the world in which the tech industry is not dominant. This is not a toolkit for building a better world, but one for patching up the crumbling edifice of the tech industry.

In her excellent recent book Technology and the Virtues, the philosopher Shannon Vallor argues that questions about technology are not merely about whether a particular technology (or company) is good or bad. Instead, Vallor astutely discusses the ways in which technologies are the reification of arguments about the type of society we want to live in and the types of people we want to be. The problem Vallor highlights is that living in the midst of a world structured by all manner of opaque high-tech gadgets “makes it increasingly difficult to identify, seek, and secure the ultimate goal of ethics—a life worth choosing; a life lived well.”[16] Thus, she argues that what is needed is the cultivation of wisdom on a global scale, emphasizing that: “we cannot lift ourselves out of the hole we are in simply by creating more and newer technologies, so long as these continue to be designed, marketed, distributed, and used by humans every bit as deficient in technomoral wisdom as the generations that used their vast new technological powers to dig the hole in the first place!”[17]

Only time will tell whether the Ethical OS Toolkit proves to be a ladder out of the hole or a shovel that makes the hole deeper. But this late in the game we should not simply take the “technologists” at their word when they claim to be building a ladder, especially when we can plainly see that they’re sinking us lower and lower. We’re in this hole together, and the tech industry isn’t going to get us out of it by itself.

 

Related Content

Be Wary of Silicon Valley’s Guilty Conscience – on the Center for Humane Technology

Living Well in the Technosocial World – a review of Technology and the Virtues

What Technology Do We Really Need?

Riddled With Questions – Interrogating Technology

The Sorcerer’s Apprentice 2.0

 

End Notes

[1] Ethical OS Toolkit, cover. https://ethicalos.org/wp-content/uploads/2018/08/Ethical-OS-Toolkit-2.pdf (Note: all further citations of the Toolkit refer to this PDF.)

[2] Toolkit, 4.

[3] Toolkit, 5.

[4] Toolkit, 13.

[5] Toolkit, 9.

[6] Jonas, Hans. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press, 1985. Pg. 31.

[7] Anders, Günther. “Theses for the Atomic Age” in Bischof, Günter, Dawsey, Jason, and Fetz, Bernhard (eds.) The Life and Works of Günther Anders: Émigré, Iconoclast, Philosopher, Man of Letters. Innsbruck: StudienVerlag, 2014. Pgs. 189-190.

[8] Toolkit, 4.

[9] Toolkit, 4.

[10] Toolkit, 5.

[11] Toolkit, 31.

[12] Toolkit, 59.

[13] Toolkit, 73.

[14] Toolkit, 5.

[15] Toolkit, 4.

[16] Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press, 2016. Pg. 6.

[17] Vallor, 11.
