Why Are Early Facebook And Google Employees Rallying To Protect Kids From Social Media?
Algorithms are driving behaviors that may very well be inadvertently targeting society’s most vulnerable
PHOTO CREDIT: Getty Images
Social media is one of those topics that divides people. Some see it as oxygen--they can't live without it--while others see it as sarin gas--something to avoid at all costs. As with most topics that polarize people, the truth lies somewhere between those two extremes. But finding that truth isn't easy, since so much of what drives social media is buried in algorithms that seem to be protected better than state secrets.
Which is why a recent coalition, started by early employees of some of the largest social media companies, such as Facebook and Google, is generating more than a bit of interest as it tries to raise awareness of the potentially negative effects that social media is having, especially on younger users.
According to a recent article in the New York Times, the coalition, called the Center for Humane Technology, has brought together an impressive group of social media pioneers, leaders in tech, F500 partners, and media in an effort to educate and lobby against what they call "tech-addiction" among youth.
Truth About Tech (the name of the campaign that the Center is embarking on) is targeting 55,000 public schools (in the US) and already has $57 million in capital and donated media outreach. The funds have been earmarked to educate parents, teachers, and students about the dangers of social media.
"While relationships between humans are subject to ethics, morals, shared values, and a social conscience, machines have none of these."
$57 million may sound a bit extreme. After all, we're just talking about social media; this isn't a campaign to raise awareness about illegal drugs or the abuse of prescription opioids. Yet, if you listen to what many experts are telling us, the impact that social media is having on children may be no less worthy of concern.
Tristan Harris, a former Design Ethicist at Google, whom The Atlantic called the "closest thing Silicon Valley has to a conscience," currently heads the Center for Humane Technology. As reported in the NYT article, Harris said, "The largest supercomputers in the world are inside of two companies -- Google and Facebook -- and where are we pointing them? We're pointing them at people's brains, at children."
That's one hell of a visual. The Center's website doesn't hold back either. Emblazoned on one of the Center's web pages is what reads like a revolutionary manifesto:
Our society is being hijacked by technology. What began as a race to monetize our attention is now eroding the pillars of our society: mental health, democracy, social relationships, and our children.
The image you get is one of social media overlords manipulating our brains and, in an ironically poetic twist, our society. While I don't believe the reality is quite as intentionally malicious as that image portrays, the business model of every social media company is driven by one fundamental objective: understand your users better than they understand themselves, whatever it takes.
In doing research for my upcoming book, Revealing The Invisible, one aspect of social media came up consistently: the obsessive focus of social media companies on developing a highly personalized understanding of users and their behaviors. There are clearly benefits to that, such as predicting behaviors and buying preferences, but there's also the potential for abuse. This is the first time we've given technology the power to build a behavioral relationship with humans, one that involves not only the evolution of human behaviors but also the evolution of machine behaviors. And the latter is increasing in its power much faster than the former.
This is where we need to be vigilant. While relationships between humans are subject to ethics, morals, shared values, and a social conscience, machines and algorithms have none of these. Analytics and empirical algorithms are driven by finite, measurable goals, such as clicks, engagement, views, and profit. In addition, the behaviors being incentivized by social media, such as the accumulation of "likes" and "friends," may actually play best to those most vulnerable. Algorithms are apathetic; they know only how to achieve a goal. They cannot tally the human toll.
An article in The Guardian, Stress and Social Media Fuel Mental Health Crisis Among Girls, went much further, correlating NHS data showing a "68% rise in hospital admissions because of self-harm among girls under 17 in past decade" with the concurrent rise of social media.
"Among those who used electronic devices five or more hours a day, 48 percent had at least one suicide-related outcome."
The authors claimed that "[i]ncreasing numbers of academic studies are finding that mental health problems have been soaring among girls over the past 10 - and in particular five - years, coinciding with the period in which young people's use of social media has exploded."
Another Guardian article even called out specific social media platforms, including Facebook, Twitter, and Snapchat, identifying Instagram as the "most damaging" to young people's mental health based on data from a survey of 1,500 14-to-24-year-olds. (To be fair, the survey did indicate that YouTube had a positive psychological impact.)
Even more disturbing research, published by the Association for Psychological Science, claimed that, "Among those who used electronic devices five or more hours a day, 48 percent had at least one suicide-related outcome. Thus, adolescents using devices five or more hours a day (vs. 1 hour) were 66 percent more likely to have at least one suicide-related outcome."
Those numbers are chilling.
So, what are we to make of all this?
A few things are clear. We really don't know exactly what social media is doing to our kids' brains and behaviors, at least no more so than my parents knew what TV was doing to mine. Although, equating generic one-size-fits-all broadcast TV to the uber-personalized, targeted, and potentially psychologically manipulative capabilities of social media is hardly a fair comparison.
We also have no societal or governmental oversight or regulation specifically intended to monitor and manage social media. We're using old tools to deal with brand new problems. While I'm hardly a proponent of ever more government, there are areas where at least developing societal awareness, and at most putting in place regulatory controls to monitor and protect a society, is warranted. We can argue the specific mechanisms, but it's clear that so far, self-regulation isn't working.
Lastly, we shouldn't lose sight of the fact that, even with these potentially harmful effects, there is positive power in the use of social media. It has been at the heart of political revolutions, such as the Arab Spring; it has given a voice to individuals who have been wronged and to the socially disenfranchised, such as through the awareness it raised for the #metoo movement or the GoFundMe campaign that raised over $200,000 for a homeless veteran. Isolated incidents? Hardly. Social media has created a new form of capital that can be used to create influence and spur action.
Ultimately, no technological advance is all positive or all negative. Our responsibility, as technologists and as members of an open society, is to make sure that the positives adequately outweigh the negatives. And that happens only if we illuminate all sides of the conversation.
In that light, the Center for Humane Technology's investment in an open and transparent social conversation about social media is certainly a significant step in the right direction.