The Social Dilemma: Inside the provocative new Netflix film with writer Vickie Curtis ’07
The tech industry insiders who invented infinite scrolling, the Facebook Like button, and so many of the other elements that help make social media so addictive didn’t set out to radically alter the fabric of society. But they did.
“Our job as filmmakers was to find out what tech insiders were saying about this thing they’ve created,” said Vickie Curtis ’07, one of three writers of The Social Dilemma, a wildly popular new Netflix docudrama that explores the dangerous human impact of social networking.
“What they were realizing at the time we were filming with them was that it wasn’t that there were quirks to the thing they made that needed fixing; it was that they had helped create these three-billion-tentacled monsters that are actually shifting the course of human history.”
Curtis joined Austin Jenkins ’95, an Olympia, Washington-based political reporter for the Northwest News Network, for a virtual conversation Nov. 11 about the process of crafting the film, which premiered at the 2020 Sundance Film Festival and was released on Netflix in September. More than 400 alumni, parents, faculty, staff and students took part in the live event, which was hosted by Conn’s Office of Alumni and Parent Engagement.
Jenkins said that when people signed up for the virtual event, they called the film thought-provoking, eye-opening, superb and very impressive. “Someone even compared it to An Inconvenient Truth, which is, of course, the documentary about global climate change, and said it’s the most important documentary to come out in a decade,” he said.
President Katherine Bergeron introduced Jenkins and Curtis—two alumni she said “exemplify our mission of putting the liberal arts into action”—and said the turnout was indicative of the timeliness and vital importance of the topic.
“How our involvement in social media is controlling our lives is a question of growing concern among parents and educators, social psychologists, and our students, too,” Bergeron said.
Curtis said she first became interested in storytelling as a self-described “theater nerd” in high school. At Conn, she studied psychology and visual art, then went on to earn an MFA in contemporary performance and original theater from Naropa University. She also taught high school for four years, an experience that showed her how to use storytelling to unpack big issues with teenagers. She first met filmmaker Jeff Orlowski, the Emmy Award-winning director of The Social Dilemma, in 2012.
“He had all this camera equipment and all of this sort of industry-specific knowledge, and then I had the storytelling background,” she said.
To bring The Social Dilemma to life, Orlowski, Curtis and fellow writer Davis Coombe wove a fictional family into the film and personified the algorithms that make the decisions designed to keep social media users engaged.
“We, for a long time, wrestled with how to make it more cinematic, because it is a lot of talking heads, and even though everything they’re saying is compelling, it’s hard to just sit and watch talking heads—that’s not what cinema is for. It’s supposed to be the images that are drawing you in, or the characters that are drawing you in,” she said.
The film delves into how tech companies like Facebook, Twitter and Google gather copious amounts of data from each user and how they use that data to make money.
“If these companies are worth hundreds of billions of dollars, why? We’re not paying them, so who is paying them? Advertisers are paying them to target their advertisements to people who will be the most susceptible to that advertisement,” Curtis said. To do that, she added, the companies have built algorithms that drive more people to the platforms, keep them engaged longer and mine user data to target individuals ever more precisely. Liking a few photos and scrolling through a feed seems innocent enough, but the results can be—and have been—catastrophic.
“The algorithm is radically indifferent; it does not know that it’s interacting with people, and it does not know that people have lives outside of looking at the content it’s feeding us. It’s just trying to put out what you are most likely to stare at, and unfortunately, because of the way our brains are wired, we are hyper-aware of things that seem to be outlandish, things that seem outrageous, things that make us feel outraged or make us feel fear or anxiety,” Curtis said.
“The algorithm has learned that those sorts of outrageous stories get more eyeballs and get your eyeballs to stay longer, and are more likely to be something you’ll share. So just purely because it’s trying to get our attention—not because it wants anything bad for humans—it is actually having really deleterious results, where we have misinformation spreading six times faster than the truth, because the truth is often much more boring and nuanced.”
That misinformation is leading to enormous social unrest, and, in places like Myanmar, has led to actual violence and genocide, she said.
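Curtis’s description amounts to a simple objective: rank content by predicted attention alone. The toy Python sketch below is purely illustrative, and every name and weight in it is hypothetical; real recommender systems are vastly more complex. Under those assumptions, it shows how a ranker that optimizes only for engagement surfaces outrage without ever “intending” to.

```python
# Purely illustrative toy model of engagement-based feed ranking.
# All names (Post, predicted_engagement, rank_feed) and weights are
# hypothetical; no real platform's system is represented here.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float  # 0..1, how provocative the content is
    watch_time: float     # average seconds users spend on it
    share_rate: float     # fraction of viewers who reshare it

def predicted_engagement(post: Post) -> float:
    """Score a post only by how likely it is to hold attention.
    Note what is absent: no term for accuracy or user well-being."""
    return (0.5 * post.outrage_score
            + 0.3 * post.watch_time / 60
            + 0.2 * post.share_rate)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: outrage rises to the top
    # not out of malice, but because nothing in the objective penalizes it.
    return sorted(posts, key=predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Nuanced policy analysis", outrage_score=0.1,
             watch_time=20, share_rate=0.01),
        Post("Outrageous rumor", outrage_score=0.9,
             watch_time=45, share_rate=0.20),
    ])
    for p in feed:
        print(f"{predicted_engagement(p):.2f}  {p.text}")
```

Nothing in this toy objective rewards truth, which is the “radical indifference” Curtis describes: the rumor outranks the analysis simply because the score never penalizes it.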
So where does humanity go from here? Curtis says the response to the film, which has already been streamed by more than 38 million households in 190 countries, is a good sign that people are ready to have the discussion.
“A lot of people have suspected that Facebook wasn’t good for them, or have had some skepticism about these companies and what the outcomes were going to be. And now we’ve really started to unpack what the negative outcomes are,” she said.
Curtis hopes to see a three-pronged approach to addressing the issue: government regulation, user education and changes within the tech industry.
But can the people who created the problem in the first place really help fix it? Curtis thinks so.
“We want to start conversations with the people who are still working at these companies and see if they are open to shifting,” she said. “A lot of people who work at Facebook and Google are aware that there are problems with the way the organization works, and they are aware that the business model is out of alignment with what’s best for humanity.”
The question becomes, according to Curtis, “Can they start building safeguards into the way that tech is actually designed?”