
Who do you trust? How data is helping us decide

Illustration: Barry Falls

Faced with a choice of babysitters, which do you rely on: your instinct – or the algorithm that tells you to book the one in the green top?

by Rachel Botsman

My first lesson in the dangers of trusting strangers came in 1983, not long after I turned five, when an unfamiliar woman entered our house. Doris, from Glasgow, was in her late 20s and starting as our nanny. My mum had found her through a posh magazine called The Lady.

Doris arrived wearing a Salvation Army uniform, complete with bonnet. “I remember her thick Scottish accent,” Mum recalls. “She told me she’d worked with kids of a similar age and was a member of the Salvation Army because she enjoyed helping people. But, honestly, she had me at hello.”

Doris lived with us for 10 months. For the most part she was a good nanny – cheerful, reliable and helpful. There was nothing unusual about her, aside from a few unexplained absences at weekends.

Back then, our neighbours, the Luxemburgs, had an au pair Doris spent a lot of time with. Late one evening, Mr Luxemburg knocked on our door after discovering the pair had been involved in running a drugs ring. “They had even been in an armed robbery,” my father later related, “and Doris was the getaway driver.” The getaway car, it transpired, was our family’s Volvo estate.

My parents decided to search Doris’s room. In a shoebox under her bed, she had stuffed piles of foreign currency, stolen from my parents’ home office. My dad stood on guard by our front door all night with a baseball bat, scared Doris would come home. Thankfully, she didn’t.

“Even as I retell this story, I feel sick,” my mum says. “I left you in the care of a serious criminal. And it took us so long to know who she really was.” Looking back, what would she have done differently? “I wish we’d known more about her.”

My parents are generally smart, rational people. Would they have made the same mistake in today’s digitally connected world? Maybe not. A growing band of technology companies are working on helping us decide who we can and can’t trust – whether hiring a nanny, renting out our home or ordering a taxi. Technology today can dig deeper into who we are than ever before. Can an algorithm determine who is the real deal and who can’t be trusted better than we can?


On a crisp autumn morning, I visit the modest offices of Trooly in Los Altos, a sleepy city in the heart of Silicon Valley. Savi Baveja, Trooly’s CEO, wants to show just how powerful these new trust checks can be. “What do you think of me running you through the Trooly software to see what comes up?” he says, smiling encouragingly.

I blush, trying to recall all the bad or embarrassing things I’ve ever done. My many speeding and parking tickets? The weird websites I spend time on (for research purposes, of course)? Old photos?

I laugh nervously. “Don’t worry – we can project it on to the large screen so you can see what is happening in real time,” Baveja offers. Somehow I don’t find that reassuring.

Anish Das Sarma, Trooly’s chief technology officer and formerly a senior researcher at Google, types my first and last name into the Instant Trust program, then my email address. That’s it. No date of birth, phone number, occupation or address.

“Trooly’s machine learning software will now mine three sources of public and permissible data,” Baveja explains. “First, public records such as birth and marriage certificates, money laundering watchlists and the sex offender register. Any global register that is public and digitised is available to us.” Then there is a super-focused crawl of the deep web: “It’s still the internet but hidden; the pages are not indexed by typical search engines.” So who uses it? “Hate communities. Paedophiles. Guns. It’s where the weird people live on the internet.”

The last source is social media such as Facebook and Instagram. Official medical records are off limits. However, if you tweeted, “I just had this horrible back surgery,” it could be categorised as legally permissible data and used. Baveja and his team spent nine months weighing up what data they should and should not use. Data on minors was out. “In some countries,” he says, “there is a legally agreed definition of the difference between ‘private’ and ‘sensitive private’ information – the latter includes medical, plus race, religion, union membership, etc. The latter is where we drew the line, as we were very aware of the creepy factor.”
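
Stripped of the specifics, the vetting Baveja describes is a filtering problem: keep public, permissible records and drop anything “sensitive private” or involving minors. Here is a minimal sketch of that filter in Python; the category and source labels are my assumptions, not Trooly’s actual taxonomy.

    # Illustrative filter for "permissible" data, per Baveja's description.
    # Category and source labels are hypothetical; Trooly's taxonomy isn't public.
    SENSITIVE_PRIVATE = {"medical", "race", "religion", "union_membership"}
    PERMISSIBLE_SOURCES = {"public_register", "deep_web", "social_media"}

    def is_permissible(record: dict) -> bool:
        """Keep only public, non-sensitive data about adults."""
        if record.get("subject_is_minor"):
            return False  # data on minors was ruled out
        if record["category"] in SENSITIVE_PRIVATE:
            return False  # "sensitive private" info is where they drew the line
        return record["source"] in PERMISSIBLE_SOURCES

    records = [
        {"category": "watchlist_hit", "source": "public_register", "subject_is_minor": False},
        {"category": "medical", "source": "social_media", "subject_is_minor": False},
    ]
    usable = [r for r in records if is_permissible(r)]  # keeps only the watchlist hit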

After about 30 seconds, my results appear. “Look, you are a one!” Baveja says. Profiles are ranked from one to five, with one the most trustworthy. “Only approximately 15% of the population are a one; they are our ‘super-goods’.”

I feel relief and a tinge of pride. How many are “super-bad”? “About 1-2% of the population across the countries Trooly covers, including the US and UK, end up between five and four.”
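
Trooly has not published how the one-to-five score is produced, but the figures quoted – roughly 15% “super-goods” at one, 1-2% “super-bads” at the other end – suggest a percentile-style banding over some underlying risk estimate. A rough sketch of that idea, with made-up cut-offs:

    # Hypothetical banding of a raw risk estimate into Trooly-style scores 1-5
    # (1 = most trustworthy). The cut-offs mirror the percentages quoted above.
    def band(risk: float, population: list[float]) -> int:
        rank = sum(r < risk for r in population) / len(population)
        if rank <= 0.15:
            return 1  # the ~15% "super-goods"
        if rank >= 0.98:
            return 5  # the ~1-2% "super-bads"
        return 2 + min(2, int((rank - 0.15) / (0.83 / 3)))  # spread the rest over 2-4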

Baveja was previously a partner at the consulting firm Bain & Company. One of his longest-standing clients was a “well-known online marketplace”. It started him thinking about the importance of trust in the digital world. “Our client needed 6% of their entire budget – hundreds of millions of dollars – to respond to things going wrong in their marketplace,” he says. “It got me thinking how the typical star rating system was not adequate to prevent a very large number of incidents online.”

Meanwhile, Baveja’s wife was running a small dental practice. People would refuse to pay, or threaten to leave bad reviews, and at the weekend there would be callers demanding drugs. “It occurred to me that small businesses, relative to big businesses, know very little about their customers,” Baveja says. “Wouldn’t it be cool if they had a way of weeding out potentially bad ones?”

To get my trust score, Trooly’s software crawled more than 3bn pages of the internet, from around 8,000 sites, in less than 30 seconds. The data was consolidated into three buckets. The most basic verified my identity. Was I who I claimed to be? This is done by checking, say, my personal website against my university profile. Next was screening for unlawful, risky or fraudulent activity. But it’s the third category that is fascinating, in which I was assessed against the “dark triad”, a trio of callous personality traits that make con artists tick: narcissism (selfishness with an excessive craving for attention), psychopathy (a lack of empathy or remorse) and Machiavellianism (a highly manipulative nature with a lack of morality). Unfortunately, Baveja can’t give me a separate score here, but it’s safe to say I passed.
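
As a structure, those three buckets are easy to picture in code. The sketch below is my reconstruction from the description above – the name-matching rule, watch terms and lexical cues are all stand-ins, nothing like Trooly’s actual models:

    # Toy version of the three buckets: identity, screening, dark-triad cues.
    RISK_TERMS = {"fraud", "money laundering", "armed robbery"}  # illustrative

    def trust_check(claimed_name: str, pages: list[tuple[str, str]]) -> dict:
        """pages: (url, text) pairs from a crawl of public sources."""
        texts = [text.lower() for _, text in pages]
        # Bucket 1: identity - does the claimed name recur across sources?
        identity_ok = sum(claimed_name.lower() in t for t in texts) >= 2
        # Bucket 2: screening for unlawful, risky or fraudulent activity.
        flags = sorted({term for term in RISK_TERMS if any(term in t for t in texts)})
        # Bucket 3: crude lexical stand-in for a dark-triad assessment.
        words = " ".join(texts).split()
        self_ref = sum(w in {"i", "me", "my"} for w in words) / max(len(words), 1)
        return {"identity_ok": identity_ok, "flags": flags, "self_reference_rate": self_ref}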

Trooly was awarded a US patent two years ago for this software, “determining trustworthiness and compatibility of a person”. Its algorithm was also programmed to take into account the “big five” traits – openness, conscientiousness, extraversion, agreeableness and neuroticism – widely accepted by researchers in the 80s as a key way to assess personalities. “Trooly developed sophisticated models to predict these traits using hundreds of features from an individual’s online footprint,” Baveja says. “It was interesting figuring out what, in that footprint, might help predict if someone is going to be, say, neurotic or rude. If you look at someone’s Twitter account and it’s peppered with excessive self-reference and swearwords, the person is much more likely to be antisocial.”
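
Baveja’s Twitter example – excessive self-reference plus swearing as a marker of antisocial tendencies – is, at its simplest, lexical feature extraction. A toy version, with a placeholder word list (the real models, he says, use hundreds of learned features):

    # Toy antisocial-language signal: rate of self-reference and swearwords.
    SELF_REFS = {"i", "me", "my", "mine"}
    SWEARS = {"damn", "hell"}  # placeholder lexicon

    def antisocial_signal(tweets: list[str]) -> float:
        words = [w.strip(".,!?'\"").lower() for t in tweets for w in t.split()]
        if not words:
            return 0.0
        hits = sum(w in SELF_REFS or w in SWEARS for w in words)
        return hits / len(words)

    # antisocial_signal(["I know I am right, damn it"]) > antisocial_signal(["Lovely day"])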


I remember a heated conversation I had with my father when I was 18. I had seen a nice-looking secondhand Peugeot for sale on eBay. He pointed out that the seller’s pseudonym was Invisible Wizard, which did not inspire confidence. So we went to the local car dealer instead.

These days, even my cautious father is something of an eBay addict. And as a society we are increasingly using technology for more intimate personal interactions, often with total strangers, whether it’s sharing our homes and cars, or finding love or babysitters online. But when you first connect with someone, how can you know if they pose a risk? Are they who they say they are? Is it even a real person?

Savi Baveja, CEO of trust check company Trooly. Photograph: courtesy of Savi Baveja

These are questions companies such as UrbanSitter need to answer. This American website connects families with babysitters, and has more than 350,000 parents and 300,000 sitters on its books. As with many online services, users have to join via Facebook or LinkedIn. The result, says 43-year-old founder Lynn Perkins, is that “when you go to book, you can see how many ‘friends’ have previously booked or are in some way connected to that sitter”.
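
That “friends who booked” figure is, underneath, a simple set intersection between a parent’s imported social graph and a sitter’s booking history. A minimal sketch with invented data – UrbanSitter’s real schema is not public:

    # Hypothetical data: a parent's imported contacts and sitters' past bookers.
    friends = {"alice", "bob", "carol"}
    bookings = {"sitter_42": {"bob", "dana"}, "sitter_7": {"erin"}}

    def friends_who_booked(sitter_id: str) -> set[str]:
        return friends & bookings.get(sitter_id, set())

    print(friends_who_booked("sitter_42"))  # {'bob'}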

Sitters and parents create detailed profiles. The amount and type of details people will voluntarily disclose are extraordinary. “One parent included a long description about their pot-bellied pigs,” Perkins recalls. “They wanted to make sure the sitter was comfortable with their pets. It sounds weird but is great expectation management.”

The final level of security is provided by Trooly and rival trust rating services such as Checkr. Did the sitter really get a childcare diploma from South Thames College? Does he or she use foul language online? Is he or she on a sex offender register?

Only 25% of the sitters who start the registration process make it on to the platform. The remaining 75% are rejected or give up because they can’t or won’t jump through the vetting hoops.

Things do go wrong now and again in this brave new world, but it is almost impossible to get entrepreneurs such as Perkins to reveal the precise number of bad incidents, minor or serious, that happen on their platforms. “They do happen but they are extremely rare,” is all she will say.


In June, Airbnb acquired Trooly for an undisclosed sum. The acquisition is part of the company’s investment in troubleshooting, along with hires such as Nick Shapiro, a former deputy chief of staff at the CIA who joined Airbnb in 2015 as global head of trust and risk management. “Earning and keeping trust will always be a core part of any functional society,” Shapiro says. “What has changed is where and how trust is exchanged. Rather than placing trust in major institutions like big business, the media or government, people are trusting each other more and more.”

Part of his brief is to figure out how Airbnb responds when things go awry. Take someone like David Carter, who booked a luxury New York apartment for the weekend in March 2014. He told the host, Ari Teman, he was looking for a place for his brother and sister-in-law to stay while they were in town for a wedding. In fact, his “in-laws” turned out to be guests for a rowdy orgy featuring “big beautiful women” and stuffed animals. Teman discovered the X-rated soiree only after he popped back into his building to pick up his luggage.

Carter had a verified Airbnb account and positive reviews. “We have a responsibility to go back and see if we could have done anything differently,” Shapiro says. “Was a mistake made?” In other words, how can the world’s Carters be weeded out?

“You could have easily figured out something like that was going to happen,” Baveja says. “You didn’t need to go deep into Carter’s psyche to work out this guy was a professional organiser of sex parties.” A simple Google search on Carter’s email address led to online ads for events such as Turn Up Part 2: The Pantie Raid, and BBW Panty Raid Party. One person even gave the apartment’s address in a tweet for an “XXX FREAK FEST”.

“I learned new acronyms looking at his profile,” Baveja says. “I’m blushing just thinking about it.”
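
The check Baveja describes is straightforward to reproduce: scan public pages for the account’s email address or the listing’s street address. A toy version over pre-fetched pages (no live search calls; the email and URL here are invented):

    # Find public pages that mention a given string (e.g. an email address).
    def pages_mentioning(pages: list[tuple[str, str]], needle: str) -> list[str]:
        return [url for url, text in pages if needle.lower() in text.lower()]

    ads = [("partysite.example/freak-fest",
            "XXX FREAK FEST this Saturday - RSVP carter@example.com")]
    print(pages_mentioning(ads, "carter@example.com"))  # ['partysite.example/freak-fest']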

I tell Baveja about Doris. Would the search have caught her if she had applied to UrbanSitter? It would. My parents would have known she didn’t belong to the Salvation Army, had no previous childcare experience and a patchy criminal history. Doris would not have made the cut.


Should we embrace these new trust algorithms? Baveja and Shapiro acknowledge the responsibility that comes with trying to take ethical decisions and translate them into code. How much of our personal information do we want trawled through in this way? And how comfortable are we with letting an algorithm judge who is trustworthy?

At my Trooly test, I found myself worrying about tiny or long-ago “transgressions” being held against me. Do companies take note of those?

“No one likes to be judged, whether by a robot or another person, but that isn’t what our screening is about,” Shapiro insists. “We are looking for major risks such as hate group membership, a violent criminal past or a fake identity. We don’t care if you sent a stupid tweet or got a parking ticket.”

Still, those things might influence an employer. Increasingly, recruiters are using digital footprints and machine learning to filter candidates. Recruitment tools such as Belong, Talview and the Social Index are offering products that aggregate and analyse online data to determine the “fit” of a candidate with a company’s culture.

There are other questions. What, for example, are the consequences for “digital ghosts”? People like my husband, who has never used Twitter or Facebook or LinkedIn. Does his “thin file” reduce his ability to be considered trustworthy?

“For 10-15% of people, we can’t give a confident score,” Baveja admits. “There’s either not enough of a digital footprint or not enough accurate inputs.”
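
In model terms, the honest response to a thin file is to abstain rather than guess. A sketch of that rule, with an invented evidence floor:

    # Abstain when the digital footprint is too thin to score confidently.
    MIN_PAGES = 5  # hypothetical minimum-evidence threshold

    def score_or_abstain(pages: list[str], score_fn) -> float | None:
        if len(pages) < MIN_PAGES:
            return None  # no confident score - the 10-15% Baveja mentions
        return score_fn(pages)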

On sites such as Airbnb, invisibility will not necessarily work against you. “We are looking for derogatory information,” Shapiro says, “and the absence of information doesn’t count against you.”

In a world in which we can find someone to fix a leak, drive us home or go on a date with a few swipes of our phones, online trust is set to get faster, smarter and more pervasive. The first time we put our credit card details into a website, say, or find a match on Tinder, it feels a bit weird, even dangerous, but the idea soon seems normal. Can technology strip out all the risk of dealing with strangers?

No way, says UrbanSitter’s Perkins. Humans are complex moral beings, and it would be foolish to remove ourselves from the picture entirely. “If a sitter shows up and you get a weird feeling, it doesn’t matter if they have passed checks, how well reviewed they are or what you thought about them online, go with your gut and cancel.”

If the onus is still on us to decide where to place our trust and in whom, we are now in a better position to ask the right questions and find the right information. Which should mean we’re much less likely to hire a narco-bank robber as a nanny.

Rachel Botsman is the author of Who Can You Trust?, published by Portfolio Penguin at £14.99. To order a copy for £12.74, go to guardianbookshop.com or call 0330 333 6846.
