Social Media Needs a Refresh
Netflix's The Social Dilemma, released last fall, provides a nuanced behind-the-scenes glimpse of how social media platforms curate personal data. Eye-opening interviews with former Google, Apple, Facebook, Twitter, and YouTube executives and engineers leave viewers with an understanding that if social media companies continue with their current operating paradigm, embarrassing status updates and accidental "Likes" will be the least of our worries. As the recent attacks on the U.S. Capitol demonstrated, social media platforms can facilitate the spread of online disinformation and even fuel real-world violence.
Computer scientist Jaron Lanier states in the film, "If we go down the status quo for, let's say, another 20 years, we probably destroy our civilization through willful ignorance." While this statement is arguably hyperbolic, the film details specific ways that people's behaviors are shaped by powerful feedback loops of content and engagement, where each click, each hour of scrolling, each feeling of validation, and every confirmation of strongly held beliefs pulls users further into their online worlds. His concern, it seems, is that people are unknowingly surrendering their free will and critical thinking to autonomous algorithms that control their online environments. In a sense, people are increasingly beholden to their online worlds and are rapidly losing the ability to curate their own realities and experiences.
Developing technology to turn a profit is not a foreign concept in a capitalistic society, but the predictive technology employed by these companies does little to protect the well-being of consumers, and instead focuses almost exclusively on generating revenue. Facebook dismissed the film's criticisms, with spokespeople stating that the movie buried substance in sensationalism. To a degree, this is true, as The Social Dilemma quite clearly used emotionally heightened language to generate distrust in social media platforms and entertain viewers. However, the critical facts presented about data collection, analytic strategies, and the pervasive use of algorithm-driven ads are true. As Tristan Harris, co-founder of the Center for Humane Technology, states, "If you're not paying for the product, then you're the product."
Social media companies harvest an excessive amount of user-specific data, perhaps not personally identifiable but still sensitive in nature, which, when used in tandem with their proprietary algorithms, keeps users engaged. Such data are gathered under the guise of "sharing with friends"; showing "interest" in certain causes, groups, or events; or even just hovering over a post while scrolling. The longer a user remains engaged with content, the more exposure they have to targeted ads. As Facebook has stated, "selling ads allows us to offer everyone the ability to connect for free," but the reality is that this ad-driven model manipulates users' attention for profit.
Recent research has revealed the influence that social media platforms exert over individuals. In essence, social media companies exploit users' psychological vulnerabilities in ways that are purposely and carefully crafted by design and content teams to keep users scrolling and clicking for longer periods of time. These platforms intentionally employ algorithms that are biased toward generating monetary success rather than serving the consumer objectively. Furthermore, we need to acknowledge that predictive technology is not magic, even though the film at times portrays it as though results appear with the wave of a wand. And if algorithms are so complex that few understand how they work, often termed "black boxes," then using such predictive technology fundamentally undermines the claim that it can be trusted to cause no harm.
The ways that platforms engage in algorithm design and implementation, data collection, and data monetization require regulatory reform, with improved privacy standards and required impact assessments of predictive technology prior to launch. Though some headway has been made at the state level, such as recent amendments to the California Consumer Privacy Act and hearings on the latest version of the Washington Privacy Act, there is a need for a comprehensive federal privacy law that protects consumers' personal and sensitive data and dictates how those data may be used by companies. Third-party oversight is necessary to address the dangers of a business model that doesn't account for the psychological and privacy ramifications for consumers.
A potential reform measure for the industry would be the creation of third-party review boards to ensure a better balance between generating profits, as is expected of any business, and protecting consumers' psychological well-being from potentially harmful data harvesting practices and algorithms. In research settings, third-party institutional review boards (IRBs) require that activities involving people follow strict protocols and restrictions. This is possible for Big Tech as well.
Google previously established an external advisory council to help with issues related to consumer privacy rights, whose members included an Oxford philosopher, a civil-rights activist, and a United Nations representative. However, this was still an attempt to self-regulate rather than have an objective mediator set the regulations. Facebook works with the Oversight Board, which is composed of 40 members from diverse disciplines across the world. The Oversight Board is not designed as an extension of Facebook's existing content review process. Rather, it reviews select, "highly emblematic cases" to determine whether Facebook is following its own stated privacy policies. It may be difficult to establish a type of IRB system for all Big Tech companies, given issues of confidentiality and proprietary predictive technology, but it is not impossible.
Additionally, to broaden accountability, social media companies should face penalties for violating civil liberties and privacy. In the United States, Congress needs to expand the enforcement powers of the Federal Trade Commission (FTC) and provide additional resources for investigations. In 2019, the FTC announced an unprecedented settlement that required Facebook to pay a $5 billion penalty for violations of existing privacy orders in relation to its sharing of user data with Cambridge Analytica.
Normally, the FTC cannot conduct these types of investigations unless they relate to violations of an existing FTC order. The 2019 settlement stemmed from an investigation of violations of the FTC's 2012 consent order, which focused specifically on misrepresentations of consumers' privacy and the sharing of user data with third parties. The FTC should be granted broader civil penalty authority so that it can investigate more instances of civil liberties and privacy violations. Additionally, the Securities and Exchange Commission should consider imposing requirements for greater transparency regarding advertisement counts and views, and how those data are used with the platforms' algorithms to generate revenue. Violations of these types of securities laws could potentially instigate criminal investigations, which would add another layer of accountability.
Scrolling through our timelines is often more enticing than dealing with the reality of busy and challenging day-to-day lives, especially in the midst of an unrelenting pandemic and a contentious start to the new Biden-Harris administration. When we are happy, we can share our joy publicly and win the attention of friends old and new. When we are angry, we can find fellow angry people to commiserate with. When we are anxious and want reassurance, we can follow those who at least claim to have solutions. What we need to realize is that the content we view and generate is also manipulated to keep us where we are, connected and enmeshed in an echo chamber of like-minded, or even similarly prejudiced, groups. Some argue that ideology, rather than online echo chambers, is to blame for the current divisiveness in our country. However, social media platforms undeniably act as willing facilitators that perpetuate the divide.
About the Authors:
Divya Ramjee is a PhD candidate and adjunct professor in 麻豆传媒鈥檚 Department of Justice, Law & Criminology. Her research focuses on the intersection of crime and technology, including the applications of artificial intelligence in the fields of criminology and criminal justice, law, and security. Her views are her own.
Dr. Margaret Cunningham is an experimental psychologist and the Principal Research Scientist for Human Behavior at Forcepoint鈥檚 X-Lab. In this role, she serves as the behavioral science subject matter expert in an interdisciplinary security team driving the development of human-centric security solutions. Her views are her own.
*THE VIEWS EXPRESSED HERE ARE STRICTLY THOSE OF THE AUTHOR AND DO NOT NECESSARILY REPRESENT THOSE OF THE CENTER OR ANY OTHER PERSON OR ENTITY AT AMERICAN UNIVERSITY.