*** This article explores the multifaceted risks associated with social media and its impact on vulnerable populations in the digital age. Despite the transformative power of platforms like Meta and TikTok, concerns about their potential harms have reached critical levels. Issues such as child safety, predatory behaviors, addictive designs, and exposure to harmful content are central to ongoing debates. Social media has also amplified societal challenges, including the spread of misinformation and the escalation of mental health crises among youth. Notably, the ease of accessing illegal drugs via social platforms has contributed to record overdose deaths, underscoring the need for robust regulatory measures. The article examines the role of user-generated content in perpetuating risks, such as the proliferation of intimate image abuse and cyberbullying, which disproportionately affect young individuals. Drawing on legal frameworks and technological advancements, the author highlights the need for collaborative solutions involving governments, industry, and civil society. Ultimately, the article calls for a balanced approach to harness the benefits of social media while safeguarding against its darker implications. ***
Introduction
On January 31, 2024, the CEOs of major social media companies, including Meta and TikTok, faced the Senate Judiciary Committee to address concerns about the impact of their platforms on young people. Issues like child safety, predatory behavior, addictive features, and content promoting suicide and eating disorders were at the forefront. Despite the transformative nature of social media, concerns have escalated over its negative effects, with a growing focus on protecting users, particularly vulnerable individuals (Cheung et al., 2015; Kapoor et al., 2018; Richey et al., 2018). Former Facebook executive Chamath Palihapitiya has publicly expressed remorse for his role in creating tools that he believes are damaging societal cohesion. This underscores the tension between the positive aspects of social media and its dark side, namely its potential to harm individuals and communities.
The rise of fake news and misinformation on social media has raised significant concerns about the integrity of information shared online, particularly during events like the COVID-19 pandemic (Laato et al., 2020). This phenomenon, often referred to as an “infodemic” by the World Health Organization, leads to confusion and mistrust among the public as false or misleading information spreads rapidly across digital channels.
The story of Cicero, the renowned lawyer and orator of ancient Rome, who was slain by Mark Antony’s soldiers in retaliation for his speeches, highlights the enduring struggle for truth (Grasso, 2022). In today’s Digital Age, access to instant information exposes us to new forms of truth manipulation, with fake news, whether intentionally fabricated or the product of error, spreading rapidly on social media platforms. Despite efforts to combat misinformation, fake news often outperforms legitimate sources in garnering attention, as seen in events like the 2016 U.S. presidential election (Muhammed & Mathew, 2022). Social media platforms like Twitter and Reddit, while facilitating communication, also harbor issues such as cyberbullying and harassment, posing challenges to civil discourse and truth dissemination (Baccarella et al., 2018).
Furthermore, concerns have arisen regarding unsettling content on social media platforms, particularly around the exchange, distribution, and reception of user-generated content (UGC). Platforms like Flickr, YouTube, and Instagram facilitate the sharing of photos, videos, and conversation-generated content (Berthon et al., 2015; van Dijck, 2009). However, this ease of sharing poses risks, including the spread of inappropriate or unauthorized content, such as violent or pornographic material. A survey of 10,000 European children aged 9 to 16 revealed that 40% expressed shock and disgust when encountering such content shared by others online (Livingstone, Kirwil, Ponte, & Staksrud, 2014). This underscores the need for platforms to address the challenges associated with managing and regulating user-generated content to ensure a safer online environment.
Drug Deaths and Social Media
Social media has exacerbated the drug overdose crisis by making it easier for teenagers to access illegal drugs from the comfort of their bedrooms. The convenience of buying drugs online has contributed to a record number of overdose deaths, with the Centers for Disease Control and Prevention (CDC) recording 103,550 overdose deaths in the U.S. in the 12-month period ending November 2022. The New York Times highlighted that many of these deaths involve counterfeit drugs laced with fentanyl, a deadly synthetic opioid. Fentanyl, 50–100 times more potent than morphine, is odorless and tasteless, making it extremely dangerous (CDC, 2019). The crisis intensified significantly between 2013 and 2014, with a 426% increase in fentanyl-related deaths, severely impacting public health and life expectancy, especially in places like British Columbia, Canada (Weiner et al., 2017).
Addressing the issue of drug-related threats and exploitation on social media platforms is complex. Despite the alarming statistics and events that underscore the severity of the problem, safeguarding against these challenges involves several significant obstacles. Among them, social media’s anonymity, the rapid dissemination of information, and the platforms’ vast reach make it difficult to regulate and monitor illegal activities effectively. Additionally, the constant evolution of online drug markets and the sophisticated methods used by sellers to evade detection further complicate efforts to protect vulnerable individuals.
Firstly, the anonymity and global reach of social media platforms make it difficult to identify and track down perpetrators. Drug dealers often use encrypted messaging apps and private accounts to evade detection, making it challenging for authorities to monitor and intervene in these transactions. Furthermore, the use of emojis and coded language to sell illegal drugs discreetly adds another layer of complexity to the issue. For example, a pill, parking sign, banana, or blue circle may represent Percocet or Oxycodone. They also use coded language like “Xanax with a ‘Z’” or “Percocet with two ‘t’s’” to avoid detection. These dealers often target vulnerable individuals, exploiting their struggles with anxiety or other issues to lure them into buying drugs through social media platforms like Snapchat or TikTok.
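The coded vocabulary described above can, in principle, be surfaced by simple lexicon matching before heavier moderation tools are applied. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the misspelled terms, and the emoji list are assumptions for demonstration, not an actual platform lexicon, which would be far larger and continuously updated as sellers change their code words.

```python
import re

# Hypothetical coded terms and emoji, modelled on the patterns described
# above (deliberate misspellings such as "Xanax with a 'Z'", and symbols
# like the pill, parking sign, banana, or blue circle).
CODED_TERMS = {"zanax", "percocett"}
DRUG_EMOJI = {
    "\U0001F48A",          # pill
    "\U0001F17F\uFE0F",    # parking sign
    "\U0001F34C",          # banana
    "\U0001F535",          # blue circle
}

def flag_message(text: str) -> bool:
    """Return True when a message contains a known coded term or emoji."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & CODED_TERMS:
        return True
    return any(symbol in text for symbol in DRUG_EMOJI)
```

A real system would pair such a lexicon with fuzzy matching and human review, since exact-match lists are trivially evaded by new spellings.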
Secondly, the scale of the problem of harmful content on social media is immense, with millions of users potentially exposed to various threats. The sheer volume of user-generated content makes it impractical to rely solely on manual moderation. This necessitates the implementation of automated solutions that can efficiently detect and flag potentially dangerous content (Kim et al., 2017). Artificial Intelligence (AI) and Machine Learning (ML) technologies, including Natural Language Processing (NLP) and Deep Neural Networks, are increasingly being deployed to address these challenges. These automated methods offer a scalable approach to content moderation, which is crucial given the limitations of manual and semi-automated moderation techniques.
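To make the automated-moderation idea concrete, here is a minimal bag-of-words Naive Bayes classifier written from scratch. It is a sketch of the general NLP technique mentioned above, not the proprietary deep-learning systems platforms actually deploy; the class name, labels, and toy training messages are all illustrative assumptions.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesModerator:
    """Toy text classifier distinguishing 'harmful' from 'benign' posts."""

    def __init__(self) -> None:
        self.word_counts = {"harmful": Counter(), "benign": Counter()}
        self.doc_counts = {"harmful": 0, "benign": 0}

    def train(self, text: str, label: str) -> None:
        self.word_counts[label].update(tokenize(text))
        self.doc_counts[label] += 1

    def score(self, text: str, label: str) -> float:
        # Log prior plus log likelihood with add-one (Laplace) smoothing.
        total_docs = sum(self.doc_counts.values())
        total_words = sum(self.word_counts[label].values())
        vocab = len(set(self.word_counts["harmful"]) | set(self.word_counts["benign"]))
        logp = math.log(self.doc_counts[label] / total_docs)
        for word in tokenize(text):
            logp += math.log((self.word_counts[label][word] + 1) / (total_words + vocab))
        return logp

    def flag(self, text: str) -> bool:
        return self.score(text, "harmful") > self.score(text, "benign")
```

Production systems replace this with large neural models, but the pipeline is the same: train on labelled examples, score new posts, and route flagged content to human reviewers.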
Thirdly, the issue of drug-related threats on social media is interconnected with broader societal issues, such as mental health, addiction, and poverty. Additionally, drug dealers keep their accounts private and use casual image profiles to further conceal their activities. These tactics are associated with Dark Social Networking Sites, where malicious actors exploit public platforms to lure users to closed, encrypted channels for drug transactions (Al-Rawi, 2019). Addressing these underlying issues requires a holistic approach that goes beyond just safeguarding against drug-related threats on social media.
Fourthly, there is a need for clear policy frameworks to hold social media companies accountable for their role in preventing and addressing drug-related threats on their platforms. While some companies have implemented measures to detect and remove illicit drug content, there is a need for more comprehensive and effective strategies to address this issue (Kazemi et al., 2017).
In conclusion, safeguarding against drug-related threats and exploitation on social media platforms is a complex issue that requires a multi-faceted approach. Addressing the challenges and issues in this area will require collaboration across disciplines, between academia and industry, and between regulatory agencies and social media companies.
Intimate Images and Social Media
In today’s digital age, the ease of capturing and distributing images has skyrocketed, thanks to ubiquitous devices like smartphones and laptops. While this facilitates instant sharing, it also raises concerns regarding the unauthorized capturing and sharing of intimate images. This phenomenon, known as intimate image abuse, involves the non-consensual dissemination of private and sensitive images, such as nude photos or videos of sexual acts (Law Commission, 2022). The consequences for victims are profound, encompassing psychological trauma, deterioration of physical well-being, and financial harm. This practice fundamentally violates individuals’ sexual autonomy, bodily privacy, and dignity.
The non-consensual distribution of intimate images, prevalent in relationship breakdowns and cyberbullying, involves sharing private photos or videos without permission. Initially exchanged consensually, these images may later be distributed by one partner to seek revenge, causing embarrassment, humiliation, and harm. Young people are particularly vulnerable, as images intended for one person may spread uncontrollably, leading to cyberbullying attacks. Such actions violate privacy and inflict emotional distress on the individuals depicted (Government of Canada, n.d.).
Limited data exist on the extent and nature of this phenomenon, and most of what is available originates from the United States. A recent survey of adults aged 18 to 54 revealed that 1 in 10 ex-partners have threatened to share intimate photos online, with 60% of these threats being carried out. Additionally, a 2008 online survey found that 20% of teens and 33% of young adults had engaged in “sexting” by sending nude pictures via text or email (Government of Canada, n.d.).
The United Kingdom has a specific criminal offense addressing the non-consensual distribution of intimate images, prohibiting the dissemination of such content without the depicted person’s consent. Section 33 of the Criminal Justice and Courts Act 2015 (England and Wales) made it a criminal offense to disclose private sexual photographs or films without the consent of the individual depicted and with the intent to cause distress, with a maximum penalty of two years in prison.
However, the effectiveness of this law in preventing and punishing such offenses is a matter of contention. Ministry of Justice data from 2016 to 2018 reveal a significant disparity in the prosecution and sentencing of adults versus children and young people, with 767 adults prosecuted and 651 sentenced, compared to only 18 prosecutions but 159 sentences for youth. This suggests that the law is applied more readily to adult offenses, while prosecutions involving younger individuals remain relatively rare. Factors contributing to this discrepancy include developmental differences, the risks associated with prosecuting youth as adults, and inconsistent application of sentencing guidelines aimed at rehabilitation. Addressing these issues requires a nuanced approach that emphasizes collaboration between legal, social, and educational systems to better safeguard and rehabilitate vulnerable young offenders.
One of the criticisms of Section 33 is the requirement for the perpetrator to have intended to cause distress to the individual depicted. This threshold has been criticized as limiting the number of defendants that can be convicted, as establishing the intent to cause distress serves as an evidential barrier. This may be particularly problematic in cases involving children and young people, as it reduces the number of defendants that can be prosecuted and the protection available to victims of abuse.
Navigating the Digital Minefield: Protecting Youth Mental Health
In the 1980s and 1990s, parenting focused on physical safety, with parents having clear oversight of their children’s activities at home. Fast-forward to 2023, and while children may be physically safe in their rooms, they are exposed to an unregulated virtual world. This includes unrestricted access to potentially harmful content, excessive screen time, and the influence of predatory individuals on social media platforms. Compounding the issue, many apps are designed to be addictive, with features like Snapstreak encouraging constant engagement. Additionally, safety concerns arise as apps often share user locations and allow for disappearing messages, making parental oversight difficult (Sridhar, 2023).
The US surgeon general and the president of the American Medical Association have raised concerns about the potential mental health risks associated with social media use among young people. Surgeon General Vivek Murthy highlighted the lack of evidence proving social media’s safety for children and warned of growing evidence linking it to harm. Similarly, AMA President Jack Resneck Jr emphasized the profound risks and mental health impacts posed by these apps, suggesting that their widespread use contributes to a national youth mental health crisis. Overall, experts are increasingly alarmed by the detrimental effects of social media on young people’s mental well-being.
The latest CDC survey highlights alarming statistics: 42% of high school students reported persistent sadness, with 22% seriously contemplating suicide. Studies show a correlation between excessive social media use and doubled risks of depression and anxiety. Additionally, 46% of adolescents feel worse about their body image due to social media, with 64% exposed to hate-based content. At the same time, many children report feeling addicted to these platforms, indicating a lack of coping mechanisms. Health experts agree on the negative impacts of social media, yet solutions largely rely on parents. It is imperative for governments to regulate social media companies, prioritizing safety and health in product design and development.
Companies prioritize keeping users on their platforms for longer periods to maximize revenue, with a significant portion of their income derived from advertising. Legislation like the “Protecting Kids on Social Media Act” proposed in the US aims to address concerns about the negative impact of excessive social media use on mental health. The bill would set a minimum age of 13 for social media use, require parental consent for users aged 13 to 18, and restrict certain algorithms for young users. Senator Brian Schatz emphasized the detrimental effects of social media on mental health, highlighting the need for public regulation to ensure companies consider the well-being of users, similar to regulations for tobacco, alcohol, or gambling.
While the above bill aims to address valid concerns about the potential negative impacts of excessive social media use on youth mental health, there are alternative perspectives and evidence that challenge the need for such stringent regulations. One argument against the bill is that it may infringe on free speech rights and parental autonomy. The Electronic Frontier Foundation (EFF), a digital rights advocacy group, has criticized the bill, stating that it “would unconstitutionally restrict the free speech rights of young people and their families”. They argue that the proposed age restrictions and parental consent requirements could limit young people’s access to valuable online resources, educational materials, and opportunities for self-expression.
While the Protecting Kids on Social Media Act bill aims to address legitimate concerns, it is essential to consider alternative perspectives and evidence. Striking a balance between protecting youth well-being and preserving individual freedoms and autonomy is a complex challenge that requires a nuanced approach informed by diverse viewpoints and ongoing research.
Conclusion
The rise of social media, as highlighted by Kietzmann et al. (2011), has indeed been remarkable. However, its pervasive presence is now accompanied by a darker side that often dominates headlines. Instances of intellectual property leaks, fake news, privacy breaches, and election meddling are increasingly common. Despite this awareness, individuals continue to engage with social media, often oblivious to the risks involved. Children are particularly vulnerable, with the digital footprint created for them without consent and the erosion of empathy due to online interactions. Adults, too, contribute to the negative aspects, resorting to public judgment and shaming without full understanding or accountability. While social media offer unprecedented connectivity and opportunities for advocacy, they also foster a culture of harassment and judgment, challenging individuals to navigate their usage more responsibly.
References
- Al-Rawi, A. (2019). The fentanyl crisis & the dark side of social media. Telematics and Informatics, 45, 101280. https://doi.org/10.1016/j.tele.2019.101280
- Baccarella, C. V., Wagner, T. F., Kietzmann, J. H., & McCarthy, I. P. (2018). Social media? It’s serious! Understanding the Dark Side of Social Media. European Management Journal, 36(4), 431–438. https://doi.org/10.1016/j.emj.2018.07.002
- Bergman, M. (2023, December 5). Social media and drugs. Social Media Victims Law Center. https://socialmediavictims.org/effects-of-social-media/drugs/
- Berthon, P., Pitt, L., Kietzmann, J., & McCarthy, I. P. (2015). CGIP: Managing Consumer-Generated Intellectual Property. California Management Review, 57(4), 43–62. https://doi.org/10.1525/cmr.2015.57.4.43
- Cheung, C., Lee, Z. W. Y., & Chan, T. K. H. (2015). Self-disclosure in social networking sites. Internet Research, 25(2), 279–299. https://doi.org/10.1108/intr-09-2013-0192
- Colon-Berezin, C., Nolan, M. L., Blachman-Forshay, J., & Paone, D. (2019). Overdose Deaths Involving Fentanyl and Fentanyl Analogs — New York City, 2000–2017. MMWR. Morbidity and Mortality Weekly Report, 68(2), 37–40. https://doi.org/10.15585/mmwr.mm6802a3
- Law Commission. (2022). Taking, making and sharing intimate images without consent. https://lawcom.gov.uk/project/taking-making-and-sharing-intimate-images-without-consent/
- Government of Canada, Department of Justice. (n.d.). Cyberbullying and the non-consensual distribution of intimate images. Retrieved April 13, 2024, from https://www.justice.gc.ca/eng/rp-pr/other-autre/cndii-cdncii/p6.html
- Grasso, C. (2022). Whistling at the fake: International roundtable “Disinformation and the Private Sector”, Session 1 [video recording at 02:21]. Available at https://www.corporatecrime.co.uk/whistling-at-the-fake-roundtable-private-sector
- Kapoor, K. K., Tamilmani, K., Rana, N. P., Patil, P., Dwivedi, Y. K., & Nerur, S. (2018). Advances in Social Media Research: Past, Present and Future. Information Systems Frontiers, 20(3), 531–558. Springer. https://link.springer.com/article/10.1007/s10796-017-9810-y
- Kazemi, D. M., Borsari, B., Levine, M. J., & Dooley, B. (2017). Systematic review of surveillance by social media platforms for illicit drug use. Journal of Public Health, 39(4), 763–776. https://doi.org/10.1093/pubmed/fdx020
- Kietzmann, J. H., Hermkens, K., McCarthy, I. P., & Silvestre, B. S. (2011). Social media? Get serious! Understanding the functional building blocks of social media. Business Horizons, 54(3), 241–251. https://doi.org/10.1016/j.bushor.2011.01.005
Disclaimer:
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Corporate Social Responsibility and Business Ethics Blog or its editors. The blog makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s) and any liability concerning the infringement of intellectual property rights remains with the author(s).
#CorporateSocialResponsibility #CSR #CSRBlog #CCO #CorporateCrimeObservatory #CostantinoGrasso #Corporation #Business #Ethics #BusinessEthics #Crime #Justice #Equality #Accountability #Law #SocialMedia #Facebook #X #Twitter #Instagram #Meta #TikTok #DigitalAge #OnlineSafety #SocialMediaImpact #DataPrivacy #CyberSecurity #DigitalEthics #InformationSecurity #TechRegulation #DigitalTransformation #MediaLiteracy #Digital #BigTech