Robyn Norrah

Essay | Algorithmic Impact

Updated: Oct 26, 2024

On the known effects of social media on its users. Written for PHI 300: Philosophical Argument/Exposition with Professor Braud at Arizona State University on October 8th, 2021. Final Grade: A.


We have been partaking in the world’s most extensive social experiment. Twenty-four years ago, the first social networking platform launched on the then six-year-old World Wide Web. The concept brought communication and marketing into a unique new form of media, taking the internet by storm from Six Degrees to MySpace, Facebook, Instagram, Snapchat, Twitter, and TikTok. Back then, there was no telling where these platforms would go or how they would affect individuals or our society. Now, social media technologies are some of the fastest-growing, most lucrative businesses in existence. Globally, 4.48 billion people use social media, with 82% of America’s entire population online. With so many people active and connected through social media, we can see that the time consumed and the interactions made on these platforms influence more than how individuals present themselves to others; they shape entire perspectives of self and worldviews.

In the past, the best we could do was hypothesize, but we now have enough data and discovery to accurately understand the impact these technologies have made on human psychology and sociology. We are entering a new era of questioning the consequences of these applications, one driven by research, not just speculation. The desire to recognize how social technologies affect users has been growing as public concerns continue to rise. Documentaries, books, and news articles have been taking up these issues and gaining popularity, forcing social platforms to respond and lawmakers to get to work.

Currently, no policies hold social networks accountable for their users or any of the outcomes their applications may motivate. Nevertheless, temperatures in Washington, D.C. have been mounting since a whistle-blower, Frances Haugen, leaked hundreds of internal Facebook and Instagram studies to the media and law enforcement before giving overwhelming testimony to Congress. We will observe the potential correlations between these published internal documents and previous studies conducted by independent researchers. We will strive to identify latent disparities between results and hope to find clarity in any notable similarities. The ultimate intention of this survey is to search for certainty. By gaining awareness of specific truths, we can start to ask more relevant and vital questions.

The Attention Economy

Social networks operate similarly, using algorithms that employ artificial intelligence, machine learning, natural language processing, and predictive analytics. These mechanisms allow the algorithm to learn about its environment, curating a custom space that reflects the interests and aspirations of individual users. Every action taken on every piece of media is collected as data and fed into a feedback loop. Algorithms quickly learn and respond to users’ likes and dislikes based on how they interact or do not interact with content. The objective is to keep users engaged with the platform for as long as possible, because social networks monetize attention.
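To make the feedback loop concrete, here is a minimal sketch of one user’s engagement cycle. Everything in it (the signal names, the weights, the topic model) is an illustrative assumption, not any platform’s actual implementation.

```python
from collections import defaultdict

# Illustrative engagement signals and weights (assumptions, not real values).
SIGNAL_WEIGHTS = {"view": 0.1, "like": 1.0, "comment": 2.0, "share": 3.0}

class FeedbackLoop:
    """One user's interest profile, updated by their own interactions."""

    def __init__(self):
        self.affinity = defaultdict(float)  # topic -> learned interest

    def record(self, topic: str, signal: str) -> None:
        # Every interaction is collected as data and nudges the profile.
        self.affinity[topic] += SIGNAL_WEIGHTS.get(signal, 0.0)

    def rank_feed(self, candidates: list[str]) -> list[str]:
        # Topics the user engaged with before are surfaced first,
        # which invites more of the same engagement: the loop closes.
        return sorted(candidates, key=lambda t: self.affinity[t], reverse=True)

loop = FeedbackLoop()
loop.record("fitness", "like")
loop.record("fitness", "share")
loop.record("news", "view")
print(loop.rank_feed(["news", "fitness", "travel"]))  # ['fitness', 'news', 'travel']
```

The circularity is the point: interactions update the profile, and the profile decides what is shown next.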

Social media platforms, like any other company, are in the business of making money. They do this by selling advertising space. The more impressions and interactions users supply to an advertisement, the larger the return social platforms obtain on that sale. Social media algorithms aim to maintain and entertain users long enough to capitalize on that attention. The downside to businesses that employ this model is that they run on artificial intelligence, learning and computing with very little human input. We can only know the effects of these operations by observing the outcomes. In this research, we hope to find insight into how these mechanics may influence how we think, perceive, and cope with the natural world. The overall trends in the selected experiments highlight various effects observed as psychological and sociological consequences of using social media technology.
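Some back-of-the-envelope arithmetic shows why attention itself is the product. Every figure below is a hypothetical assumption, chosen only to make visible the linear relationship between time-on-platform and ad revenue.

```python
# Hypothetical figures for illustration only.
cpm = 5.00            # assumed revenue per 1,000 ad impressions (USD)
ads_per_minute = 1.5  # assumed ad load in a scrolling feed
minutes_per_day = 45  # assumed average daily use per user
users = 1_000_000

impressions = users * minutes_per_day * ads_per_minute
revenue = impressions / 1_000 * cpm
print(f"${revenue:,.0f} per day")  # $337,500 per day

# Each additional minute of average attention scales revenue linearly,
# which is why engagement time is the metric the algorithm optimizes.
per_extra_minute = users * ads_per_minute / 1_000 * cpm
print(f"+${per_extra_minute:,.0f} per day for each extra minute")  # +$7,500
```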


Decoding Problematic Use

Addiction is more than a behavioral observation; it is a neurological one involving the chemical release of dopamine in the brain. Dopamine is known to be released after using drugs, gambling, shopping, eating, having sex, and even experiencing positive social interaction. When the chemical is released, it travels through reward pathways in the brain that alter and motivate behavior. Social media networks stimulate these cognitive responses, delivering hits of dopamine through numerous interactions. The Diagnostic and Statistical Manual of Mental Disorders (DSM) does not recognize social media addiction as a mental disorder, but psychologists repeatedly study the topic. Compiling outcomes from more recent studies, the consensus is that around 10-15% of users may have an addiction to these applications. Researchers attribute blame to engagement-driven algorithms that study users’ interests to optimize for repeated use. Habit-reducing advice is spread across numerous healthcare websites, suggesting that users limit their time on social applications and turn off their notifications. Meanwhile, most social media companies do not consider themselves liable for the results of users’ expressed addictive behaviors.

Internal studies of Facebook and Instagram show themes of “problematic use,” or addiction, as defined in testimony to the United States Congress by Frances Haugen. These studies targeted users experiencing “hard life moments” to understand how the platforms may be supporting or hurting individuals during these times. The results are somewhat skewed because they focus on a refined subset of users, but even within these findings, it is clear that users feel these applications negatively contribute to their behaviors, thoughts, and perceptions. Of the users surveyed, 31% believed that the applications made problematic use worse. This finding is far more somber than the cumulative results of earlier independent studies. Critics of the attention-economy design have pointed out that independent researchers are limited in their attempts to untangle questions around addiction because they have to rely on self-reported data. In comparison, companies like Facebook hold hundreds of petabytes of user data that could potentially speak volumes about behavioral patterns. Exaggeration is possible in the dataset provided by Facebook’s leaked documents, but with so many unknowns around how and why they targeted users for these studies, this information could hold more relevance at a second glance. Unfortunately, no additional clarity is available around those specifics, leaving us to speculate further and to encourage more exhaustive research. The only certainty available analyses offer today is that addictive components exist in this technology, and changes to policy and function that would acknowledge or curb the negative impacts on users are not happening.

On the Edge of Prediction


In the social atmosphere of engagement ranking, content is amplified by algorithms not just according to specific users’ interests but by being categorically controversial. When a post on social media goes viral, it gains its weight from a sum of engagement. This measurement is known as MSI, or meaningful social interaction, aggregating likes, views, comments, and shares. Algorithms read and distribute this content based on predictive analysis: if a certain kind of content performed well before, it might perform well again. Alternatively, if a specific type of media performs well for a specific group of users, it will perform well for those users or for other users like them. This content follows distinct patterns that can only be observed in hindsight as they develop through machine learning, and researchers have found that some of those outcomes can significantly harm individual users and society at large.

Proactive incident response studies can test how algorithms are patterning by creating bot accounts, or fake profiles, to evaluate what content displays after a set of particular actions. In one example, Facebook created a fake Instagram account and began following and liking posts about healthy lifestyles. Shortly after, the algorithm began to show posts revolving around disordered eating and anorexia. This result is not shocking to Angela Guarda, director of Johns Hopkins Hospital's eating disorders program and associate professor of psychiatry at its School of Medicine. She told the Wall Street Journal that many of her patients have acquired tips on restricting food and purging expressly through social media. These effects appear again in further research conducted across the globe, where findings show a relationship between social media use and disordered eating. Whether social media algorithms themselves perpetuate these damaging narratives, or individuals cause these functions to perpetuate them, is still uncertain. We know that there are correlations, and these algorithmic cause-and-effect scenarios do not stop at disordered eating: self-harm and suicide have similarly been studied and linked to social media use.
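As a rough illustration of how an MSI-style score can reward controversy, consider the sketch below. The leaked documents describe the aggregation of likes, views, comments, and shares; the specific weights here are assumptions for illustration only.

```python
# Assumed weights: heavier interactions count more (illustrative only).
MSI_WEIGHTS = {"view": 0.1, "like": 1, "comment": 5, "share": 10}

def msi_score(post: dict) -> float:
    """Aggregate weighted interactions into a single ranking signal."""
    return sum(weight * post.get(signal, 0) for signal, weight in MSI_WEIGHTS.items())

calm_post  = {"view": 10_000, "like": 400, "comment": 20,  "share": 5}
angry_post = {"view": 8_000,  "like": 300, "comment": 900, "share": 250}

print(msi_score(calm_post))   # 1550.0
print(msi_score(angry_post))  # 8100.0 -> the provocative post ranks far higher
```

Under weights like these, a post that provokes arguments and reshares outranks a post that is merely viewed and liked, even with fewer impressions.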

Findings that social media applications influence the mind may considerably impact our world, especially in news, politics, and human rights. On Twitter, a Microsoft artificial intelligence chatbot named Tay learned to post misogynistic and racist content after observing and interacting with the platform for only 16 hours. In another incident response study from Facebook, researchers saw that after an account followed distinct political leaders, the algorithm suggested extremist group pages to the fake user. These experiments demonstrate how algorithms can contain dangerous biases that perpetuate negative and often divisive perspectives. When algorithms are only concerned with metrics of meaningful social interaction, engagement is the number one priority regardless of the content’s sentiment.

An internal memo from Facebook noted that “misinformation, toxicity, and violent content are inordinately prevalent among reshares.” News media sites have expressed frustration with this apparent skew of virality, noting that 80% of their content now focuses on negative and polarizing issues to keep up with Facebook’s algorithmic preferences. Facebook has been aware of these issues but has taken limited action to combat them. Matters surrounding this topic have extended to political unrest, from the Cambridge Analytica incident to an actual genocide in Myanmar. Researchers and critics continue to point out that these social media algorithms are flawed, lacking any aim to unite and inspire people while feeding off the reach that hate-driven content elicits.

Machine learning through natural language processing and predictive models can be dangerous to society when it encourages seriously harmful and compulsive content. While this programming has in many ways dramatically enhanced civilization, these are still very limited computational methods. Algorithms can read and parse language but still do not understand tone or underlying intention. This aspect of human language makes humanity unique, but it is very unreliable ground for deciphering intent on the internet. Mechanisms are built into social media algorithms to protect the public from such thoughtless predictive calculations, yet platforms like Facebook invest more in English-language models than in any other language. As a result, Facebook’s algorithms cannot detect or process all content posted or shared in many other languages, allowing potentially damaging content to slide through the system. As we frequently observe this occurrence on Facebook’s platforms, we cannot help but wonder how algorithms on alternative social sites may inadvertently work against individuals or entire populations. Could they lead users to look for inspiration to curb their appetites or even to join a brigade to commit genocide?

Algorithms identify a theme and work to bring users content that continues their use of the application. The difference seems stark to us humans, but in the eyes of artificial intelligence, these paths toward self-destruction or public destruction are somehow the most efficient ways to engage users obsessively and for the long haul. One thing is sure: the virality of content on social media platforms can give us great insight into how and why social algorithms learn what to promote in users’ feeds.

Aspects of Limitation

The problem with these studies lies in what we do and do not know. On the one hand, we have insight into two of the largest social media networks, Facebook and Instagram, but this information is internal; none of the datasets or explicit details of these research initiatives is known. Studies conducted outside these companies work with survey information provided by their subjects, with no insight into those subjects’ past social media history, engagements, or sentiments. However, even without these factors, we see consistent trends. From the internal perspective, there are calculated and calibrated risks between users and the algorithm. From the external perspective, there are striking correlations between users and their relationships with social media applications. Either way it is spun, the consequences are concerning. Attention-driven economic practices should be evaluated and monitored for the safety of individuals and society.

Questions inspired by these observations sit at the surface. If so many people can be misdirected into addictive behaviors, self-harm, or hate, what other areas of social media may influence the way users think and perceive themselves and the world? Do we benchmark addiction, self-harm, and hate against the irreversible actions of suicide and genocide, or can we set the bar higher? How do we measure when these applications are enforcing overly persuasive actions and narratives? How could a user even know that something is not their own thought but the influence of an algorithm? It would be interesting to research similar subjects, or users with parallel interests, to observe what course of action they would take online versus offline. Otherwise, how could we know how much these technologies play into our decision-making and beliefs?

Conclusion

Critics may claim that these systems are safe because they aim to entertain. But like any other form of entertainment today, they need protections to safeguard younger, developing populations and to inform all other users. Television, radio, and print all have policies built around them to minimize the harm they do to populations while grounding our rights to freedom of speech. Regulations require radio stations to obtain a license, television must accommodate parental guidelines, and defamation laws protect people from lies spread in fake news articles. Never in the history of humanity have we seen unregulated media technology gain such influence over our entire world. The attention economy intentionally reinforces addiction-forming behaviors and influences violent behavioral responses. It is high time to scrutinize social media platforms and build legislation around them to protect users from further harm.

Recent proceedings in Congress show a growing interest in making such legislation possible to protect the most vulnerable populations affected by these technologies, like children and teens. Talk of making datasets publicly available to independent scientists and researchers could yield an abundance of academic understanding in the future. Prospects of further research and transparency from all social media providers could give this review a new scope. It is uncertain what will come of these proceedings and the conversations around them, or how they will impact social media networks, but they are a step toward making these spaces safer for everyone involved.


Works Cited

Bhargava, Vikram R., and Manuel Velasquez. “Ethics of the Attention Economy: The Problem of Social Media Addiction.” Business Ethics Quarterly, 2020, pp. 1–39, https://doi.org/10.1017/beq.2020.32.

“The Facebook Files; A Wall Street Journal Investigation.” Wall Street Journal (Online), 15 Sept. 2021. ABI/INFORM Collection; Global Newsstream, http://login.ezproxy1.lib.asu.edu/login?url=https://www-proquest-com.ezproxy1.lib.asu.edu/newspapers/facebook-files-wall-street-journal-investigation/docview/2572526408/se-2?accountid=4485. Accessed 5 Oct. 2021.

Dizikes, Peter. “Study: On Twitter, False News Travels Faster than True Stories.” MIT News, Massachusetts Institute of Technology, 8 Mar. 2018, https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308.

Sanford, Claire. “Facebook Whistleblower Frances Haugen Testifies on Children & Social Media Use: Full Senate Hearing Transcript.” Rev, 6 Oct. 2021, https://www.rev.com/blog/transcripts/facebook-whistleblower-frances-haugen-testifies-on-children-social-media-use-full-senate-hearing-transcript.

Weinstein, Emily. “The Social Media See-Saw: Positive and Negative Influences on Adolescents’ Affective Well-Being.” New Media & Society, vol. 20, no. 10, 2018, pp. 3597–3623, https://doi.org/10.1177/1461444818755634.

“What Our Research Really Says about Teen Well-Being and Instagram.” About Facebook, 30 Sept. 2021, https://about.fb.com/news/2021/09/research-teen-well-being-and-instagram/.

Zendle, David, and Henrietta Bowden-Jones. “Is Excessive Use of Social Media an Addiction?” BMJ, 2019, p. l2171, https://doi.org/10.1136/bmj.l2171.

