*** This article examines the 2024 Almendralejo school scandal, in which 15 minors used AI-powered nudification applications to generate and distribute sexual images of 20 schoolgirls via Telegram, as a case study for analysing corporate liability gaps in European digital platform governance. Tracing a three-stage harm pathway, from open-source AI model creation, through freely accessible nudification applications, to large-scale distribution on messaging platforms, the analysis demonstrates that existing EU regulatory instruments fail to impose adequate ex-ante obligations on the actors best positioned to prevent foreseeable harm. The AI Act’s reliance on static risk categories is shown to be fundamentally ill-suited to generative tools whose applications span “infinite domains,” while Article 50’s transparency duties fall short at the distribution phase by placing the burden on the content generator rather than the intermediary platform. The Digital Services Act’s notice-and-takedown framework proves structurally inadequate where Telegram’s architecture ensures content is redistributed before moderation can intervene. Applying the evolving corporate liability doctrine of “failing to prevent” harm, and benchmarking against the UN Guiding Principles on Business and Human Rights, the article argues that soft-law due diligence frameworks are insufficient for harms of this severity. Drawing on the UK Women and Equalities Committee’s recent recommendations for mandatory criminal sanctions and infrastructure-level enforcement, the study concludes that platform accountability must transition from reactive damage control to the “anticipatory governance” model advocated by UNESCO, in which foreseeable systemic risks are addressed before they materialise rather than remediated after irreversible harm has occurred. ***

Introduction
On 20 June 2024, 15 minors from Almendralejo, Spain, born in 2008 and 2009, were sentenced to one year of supervised probation with educational rehabilitation. Using freely available AI apps, they had taken photographs of schoolgirls’ faces and superimposed them onto other bodies, creating sexual images that made it appear the bodies were theirs. A total of 20 girls aged between 11 and 14 were affected by these acts. The sentenced minors were accused of 20 offences of creating child sexual abuse material (CSAM)[1] and 20 offences against moral integrity.[2]
These facts raise questions about the accessibility and ease of use of artificial intelligence for creating sexual images. These concerns are not new; the debate on the ethics of machine learning and AI has been around since (at least) 1950, when Alan M. Turing asked whether machines could think.[3] Years later, in 2015, Google open-sourced its machine-learning framework TensorFlow, making powerful AI tooling accessible to the general public and facilitating the development of tools that could generate realistic manipulated videos and images. This led commentators to consider the harmful ways these developments could be used, such as deepfakes.[4]
This article argues that platforms lack accountability[5] despite the foreseeability[6] of the harms they could cause.[7] This lack of accountability is often the result of what has been described as a “criminogenic environment,” in which corporations are “empowered to shape the enforceable regulatory landscape, and evade detection of wrongdoing.” By weaponising their public-facing social commitments, these entities can secretly lobby for a “vastly de-regulated legal environment” that favours corporate interests over societal safety.[8] Foreseeability is clearly established by the fact that deepfake sexual abuse had become a documented category of systemic crime years before the 2024 scandal: a report by Europol identified the “weaponisation” of synthetic media as a major societal threat, noting that as early as 2019, “96% of the fake videos [online] involved non-consensual pornography”.[9]
In this context, the question of platform accountability must be framed through the concept of corporate liability, which refers to the legal criteria for attributing responsibility to a firm for harms arising from its operations. Traditionally, many jurisdictions have adopted a narrow approach, holding a company responsible only where its “directing mind” or governing body is directly involved in wrongdoing. This model struggles to capture systemic risks generated by large corporate bodies. As a result, legal standards are increasingly moving toward a prevention-based approach, under which a company may be held liable for “failing to prevent” harm where adequate safeguards or procedures are absent or inadequate.[10]
The following analysis addresses intermediary liability and duty of care obligations under EU regulations, rather than corporate criminal liability. It examines whether the harm resulted from gaps in European directives and whether the new EU Artificial Intelligence Act and Digital Services Act will prevent such harm.
AI Design Failure
Deepfake technology runs on artificial intelligence and deep (machine) learning techniques.[11] Open-source AI models make their code freely available for third parties to copy, fine-tune, and use as they see fit. This means that third parties can use them for purposes other than those originally intended, such as the creation of sexually explicit deepfakes.[12] The EPRS highlights that this is a distinctly gendered threat, noting that “women are the primary targets of deep-fakes, particularly of nudification,” and that over 90% of all deep-fake videos are pornographic, targeting victims who are “almost exclusively women.”[13] The gendered nature of this threat is compounded by the staggering prevalence of intimate image abuse in digital spaces; research indicates that as many as “1 in 10 ex-partners have threatened to share intimate photos online, with 60% of these threats being carried out.”[14] This underscores a culture of digital entitlement that AI ‘nudification’ tools now automate and scale. Some providers of open-source AI models are implementing safety features for their foundation models, including filters for ‘unsafe’ content, but misuse continues because perpetrators know, and openly discuss, how to bypass these safeguards.[15]
The apps the minors in the Almendralejo case used are built on open-source AI models and are freely accessible through Google, social media sites, and app stores.[16] While technical expertise was once a barrier to entry, that barrier has effectively collapsed: the technology is now user-friendly.[17] Recent empirical research has identified nearly 35,000 publicly available deepfake model variants, which have been downloaded over 15 million times. This explosion in content is driven by “Low Rank Adaptation” (LoRA) techniques, which allow a user to fine-tune a model with as few as 20 images in roughly 15 minutes on standard consumer-grade hardware.[18] It is also worth noting that, because newly created images can be shared repeatedly, they may become a recurring source of harm to victims.[19]
Is there still a gap in European Union law?
The Almendralejo case is just one of many similar cases across Spain[20] and Europe, as seen recently in the Commission’s investigation into X’s chatbot Grok.[21] This can be attributed to the regulatory gap for generative AI tools,[22] specifically open-source models. For the purposes of this article, a gap exists when the law fails to impose ex-ante obligations on providers or AI developers to prevent the production of harmful AI-generated content, despite their being best positioned to foresee such harms. In this case, the harm pathway begins with open-source AI models, operates through easily accessible nudification applications, and ends with distribution via Telegram. Each stage of the pathway corresponds to a different regulatory framework.
Although the Artificial Intelligence Act establishes a tiered, risk-based classification system, open-source models are generally excluded from its scope unless they are categorised as high-risk or fall under the prohibitions of Article 5.[23] While Article 50 introduces transparency requirements for deepfakes, its efficacy is limited in a distribution context; the duty to label content rests primarily with the AI provider or user generating the image, rather than the intermediary platform (like Telegram) where the harm proliferates.[24] This reliance on static risk categories is fundamentally problematic for generative tools. This is because general-purpose AI can be applied across “infinite domains” and, as a result, it is “very difficult to delineate the field of application… upfront.”[25] This creates a regulatory loophole where harmful “nudification” tools can emerge from foundation models originally deemed low risk. The systemic nature of this gap is further illustrated by the withdrawal of the proposed AI Liability Directive in 2025, which reflects the ongoing difficulty in reaching a European consensus on how to attribute responsibility for AI-related harms.[26]
Article 5 lists prohibited practices, such as those that manipulate human behaviour, exploit vulnerabilities, or enable social scoring by public authorities, but makes no clear reference to deepfake sexual images; its focus is on behavioural manipulation and surveillance rather than content-based harm.[27] Article 50 addresses deepfakes primarily through transparency obligations; however, it fails to address the distribution phase, as it places the burden on the “generator” to label content rather than requiring platforms to proactively detect or block unlabelled AI-generated sexual imagery.[28]
As previously mentioned, the apps used in the Almendralejo case were accessed through Telegram.[29] This brings the Digital Services Act into scope, since one of its aims is to protect users from harmful and illegal content.[30] Article 6 establishes that platforms are not liable for the illegal content they host unless they have knowledge or awareness of it and fail to remove it.[31] Article 16 requires mechanisms for users to report such content, which must be easily accessible and user-friendly.[32] Reporting is not the only route to awareness, however; awareness can also arise from media coverage, law enforcement notices, and internal detection. Articles 34 and 35 are directed at providers of very large online platforms and very large online search engines (VLOPs, VLOSEs), which must evaluate the systemic risks they might pose and mitigate them through effective measures.[33] A platform is considered a VLOP when it has more than 45 million average monthly users in the EU.[34] Although independent estimates suggest that Telegram meets this threshold, the figures it officially reports fall below it, and the EU therefore does not consider it a VLOP.[35]
The EU already recognises that image-based personal data can raise concerns.[36] Since this case involves minors, the question arises of what protection is available for children online. Under Article 9 GDPR, images of a person qualify as biometric data where they are processed for identification purposes, and sexual content qualifies as data concerning a person’s sexual life; processing such data is prohibited unless an exception applies.[37] Article 6 requires any processing of this imagery to rest on a lawful basis.[38] Article 8 concerns the protection of children’s data and sets the age of consent for processing personal data at 16, unless Member States provide otherwise, though it may never be set below 13.[39] While GDPR Article 8 mandates parental consent for children under 16, its application is stymied in practice; the Almendralejo judgment reveals a critical lack of age-verification mechanisms within the apps used, creating a “dark space” where data protection rights are non-functional.[40] EU law also addresses CSAM directly: Article 2 of Directive 2011/93/EU defines it,[41] and Article 5 of the same directive criminalises its acquisition, distribution, and possession.[42] On paper, the EU likewise recognises privacy, data protection, dignity, and children’s rights as fundamental rights,[43] but, as seen above, the focus is on individual users of platforms and AI rather than on placing responsibility on developers.
Platform Design Failure: Telegram
Since Telegram played an important role in this case, the platform’s design becomes relevant. Telegram groups can have up to 200,000 members,[44] and channels can have unlimited subscribers.[45] These channels have no verification mechanism; they allow the forwarding of messages across channels and groups, and they support content duplication, identity spoofing, anonymity,[46] and the use of bots that automate the diffusion of messages and content.[47] Although Telegram is not anonymous by default, there are ways around this.[48] Some groups also require manual requests for access and proof of payment, which hinders reporting, external monitoring, traceability, and access by investigators and authorities.[49] The current moderation mechanisms rely on user reporting,[50] which accords with the DSA, but by the time unlawful content is flagged, it may already have been saved and redistributed. This means there is no control over who receives the content, and tracing its diffusion is complicated.[51] These features (forwarding, duplication, and the scale of groups and channels) make the harm difficult to remediate once content has been redistributed,[52] while showing that the potential for abuse is systemically facilitated by the platform’s architecture (e.g., unlimited channel scaling and automated bot diffusion). They also suggest that the DSA’s reliance on ex-post enforcement has limitations.
Rebalancing Responsibility: Recommendations and Conclusion
It is important to acknowledge the tensions within the EU over digital regulation. While imposing stricter obligations on AI and digital platforms can do much to protect individuals in cases like this, it can also raise concerns regarding freedom of expression and information, which the EU seeks to protect through the DSA.[53] The DSA also protects the right to privacy to an extent, providing that platforms are under no general obligation to monitor activity on their services.[54] The EU likewise seeks to promote innovation by avoiding excessive burdens on developers, as seen in the AI Act’s approach to regulating open-source models.[55] Nevertheless, these considerations should not outweigh the need for reform imposing stronger safeguards in cases of foreseeable harm.
The European Commission’s recent investigation into X’s chatbot Grok, which seeks to determine whether X failed to assess and mitigate risks under the DSA, shows how risks are being assessed in practice.[56] It reinforces the need to clarify and strengthen platform and developer accountability under the DSA and the AI Act. This regulatory hardening is mirrored in the UK, where, in early 2026, Prime Minister Keir Starmer warned that X could lose the “right to self-regulate” if it failed to control deepfake creation.[57] Accountability must transition from reactive damage control to the “anticipatory governance” advocated by UNESCO, which seeks to address potential risks before they materialise.[58]
The Almendralejo case is just one of many demonstrating that AI developers and platforms lack accountability, and this analysis has highlighted the gaps in AI and digital platform governance. To prevent similar harms, the EU should consider several recommendations, which address the different stages of the harm pathway identified here: the creation of AI tools, distribution on digital platforms, and enforcement once unlawful content is detected. The recommendations include: enhancing transparency in risk assessments and standard-setting for AI models;[59] introducing regulatory sandboxes to test the technology before it reaches the market;[60] implementing mandatory verification and certification systems for AI tools to ensure they meet safety requirements;[61] adopting regulation that imposes proactive detection duties on platforms instead of a reaction-based approach;[62] requiring the prevention of future violations once a platform has been informed of one;[63] and expanding the risk criteria of the DSA beyond VLOPs to include risk accumulation and cumulative harms.[64] These recommendations align with soft-law corporate responsibility frameworks such as the UN Guiding Principles on Business and Human Rights,[65] which establish that companies have a responsibility to conduct human rights due diligence and to prevent or mitigate harms linked to their services. However, given the life-altering seriousness of deepfake abuse, and although this analysis emphasises shifting responsibility towards the corporations that enable the harms, relying on a soft-law due diligence approach may be insufficient to prevent harm effectively. Consequently, a more stringent mandatory approach may be warranted, as seen in the recent position of the UK Women and Equalities Committee, which concluded that current enforcement powers are “too slow and not designed to help individuals” remove content from non-compliant, overseas sites.
To address this, the Committee recommends making the possession of such images a criminal offence; on this basis, it argues, internet infrastructure providers could finally be compelled to block or disrupt access to non-compliant domains.[66] In the United Kingdom, this shift toward mandatory enforcement is already in motion: in January 2026, Ofcom opened a formal investigation into X (XIUC) following reports that its “Grok model… is/was being used to generate and share content that may amount to intimate image abuse.”[67]
References
- Alliance for Universal Digital Rights and Equality Now, Briefing paper: Deepfake image-based sexual abuse, tech-facilitated sexual exploitation and the law (January 2024) https://audri.org/wp-content/uploads/2024/01/EN-AUDRi-Briefing-paper-deepfake-06.pdf
- Brione P. and Gajjar D., ‘Artificial intelligence: ethics, governance and regulation’ (7 October 2024) (UK Parliament briefing) https://perma.cc/FK3D-4YG8
- Broughton Micova S, What is the Harm in Size: Very Large Online Platforms in the Digital Services Act (CERRE Issue Paper, 19 October 2021) https://ueaeprints.uea.ac.uk/id/eprint/83031/1/211019_CERRE_IP_What_is_the_harm_in_size_FINAL.pdf
- Cantero Gamito M. and Marsden C., ‘Artificial intelligence co-regulation? The role of standards in the EU AI Act’ (2024) International Journal of Law and Information Technology
- Castelfranchi C., ‘Alan Turing’s “Computing Machinery and Intelligence”’ (2013) 32 Topoi 293
- Children’s Commissioner for England, ‘“One day this could happen to me”: Children, nudification tools and sexually explicit deepfakes’ (April 2025) https://assets.childrenscommissioner.gov.uk/wpuploads/2025/04/Children-nudification-tools-and-sexually-explicit-deepfakes-April-2025.pdf
- Chun J., Schroeder de Witt C., and Elkins K., ‘Comparative Global AI Regulation: Policy Perspectives from the EU, China and the US’ (2024) 47 Fordham International Law Journal 1
- Consejo General del Poder Judicial, ‘Imponen la medida de libertad vigilada durante un año a los 15 menores acusados de manipular y difundir imágenes de menores desnudas en Badajoz’ (9 July 2024) https://www.poderjudicial.es/cgpj/es/Poder-Judicial/Noticias-Judiciales/Imponen-la-medida-de-libertad-vigilada-durante-un-ano-a-los-15-menores-acusados-de-manipular-y-difundir-imagenes-de-menores-desnudas-en-Badajoz
- Cress L., ‘X could “lose right to self regulate”, says Starmer’ BBC News (12 January 2026) https://www.bbc.co.uk/news/articles/cq845glnvl1o
- de Villiers M., ‘Foreseeability Decoded,’ (2015) 16 Minnesota Journal of Law Science & Technology 343
- Duller Y., ‘Who Governs AI in the EU? A Breakdown of Authorities in the EU AI Act’, UNESCO, 22 December 2025, https://perma.cc/WDX2-QTT3
- European Commission, ‘AI Liability Directive’ https://www.ai-liability-directive.com
- European Commission, ‘Commission Investigates Grok and X’s Recommender Systems under the Digital Services Act’ (Press Release IP/26/203, 25 January 2026) https://perma.cc/V5QQ-4FJE
- European Commission, ‘DSA: Very large online platforms and search engines’ https://digital-strategy.ec.europa.eu/en/policies/dsa-vlops
- European Data Protection Board, Guidelines 3/2019 on processing of personal data through video devices (Version 2.0, 29 January 2020) https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_201903_video_devices.pdf
- European Parliament, ‘EU AI Act: First regulation on artificial intelligence’ (2023) https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- European Parliament, ‘EU Digital Markets Act and Digital Services Act explained’ (14 December 2021) https://www.europarl.europa.eu/topics/en/article/20211209STO19124/eu-digital-markets-act-and-digital-services-act-explained
- European Parliament, ‘The case of Telegram and the methodology for determining VLOPs under the DSA’ (Question for written answer E-001293/2025, 27 March 2025) https://www.europarl.europa.eu/doceo/document/E-10-2025-001293_EN.html
- Europol, Facing reality? Law enforcement and the challenge of deepfakes (Observatory Report, Europol Innovation Lab 2022) https://perma.cc/NE2R-LZ3F
- Franco M., Gaggi O., and Palazzi C., ‘Characterizing Non-Consensual Intimate Image Abuse on Telegram Groups and Channels’ (2024) 4th International Workshop on Open Challenges in Online Social Networks (OASIS ’24), Poznan, Poland https://dl.acm.org/doi/epdf/10.1145/3677117.3685008
- Girich M., Levashenko A., Valamat-Zade A., and Magomedova O., ‘Trends in Regulating Online Platforms Worldwide: International Experience’ (2021) Russian Economy in 2020: Trends and Outlooks, Issue 42
- Grasso C., ‘Peaks and troughs of the English deferred prosecution agreement: the lesson learned from the DPA between the SFO and ICBC SB Plc’ (2016) (5) Journal of Business Law 388
- Grasso C. and Holden S., ‘Exploring the Interconnections Between Corporate Social Responsibility and Corporate Crime’ in Bianchin M. and Palmiter A. (eds.) The Emerging Law of Sustainable Corporations: Chronicles from a Course, a Colloquium, and a Symposium, Padova University Press, (2024) 235, https://www.padovauniversitypress.it/system/files/download-count/attachments/2025-03/9788869383335.pdf
- Harris D., ‘Deepfakes: False Pornography Is Here, and the Law Cannot Protect You’ (2019) 17 Duke Law & Technology Review 99
- Hawkins W., Mittelstadt B., and Russell C., ‘Deepfakes on Demand: The rise of accessible non-consensual deepfake image generators,’ In The 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25), June 23–26, 2025, Athens, Greece. ACM, New York, NY, USA, 1602, https://dl.acm.org/doi/epdf/10.1145/3715275.3732107
- House of Commons Women and Equalities Committee, Tackling non-consensual intimate image abuse: Fourth Report of Session 2024–25 (HC 336, 5 March 2025) https://committees.parliament.uk/publications/46899/documents/241995/default
- Huber A. R. and Ward Z., ‘Non-consensual Intimate Image Distribution: Nature, Removal, and Implications for the Online Safety Act’ (2024) European Journal of Criminology
- Internet Watch Foundation, What has changed in the AI CSAM landscape? (AI CSAM Report Update, July 2024) https://www.iwf.org.uk/media/drufozvi/iwf-ai-csam-report_update-public-jul24v12.pdf
- La Morgia M. and others, Uncovering the Dark Side of Telegram: Fakes, Clones, Scams and More (arXiv preprint, 2021) https://arxiv.org/pdf/2111.13530
- Mento C. and others, ‘Psychological Violence in Image-Based Sexual Abuse (IBSA): The Role of Psychological Traits and Social Communications—A Narrative Review’ (2025) Healthcare
- Ofcom, ‘Investigation into X Internet Unlimited Company and its compliance with duties to protect its users from illegal content and child users from harmful content,’ 12 January 2026, https://perma.cc/7RV6-T38B
- Oladipupo D., ‘Shadows of the Digital Age Protecting Vulnerable Individuals Online,’ The Corporate Social Responsibility and Business Ethics Blog (Nov 16, 2024), 4, https://corporatesocialresponsibilityblog.com/2024/11/16/shadows-digital-age
- Onyiuke T., ‘Controlling Realities: The Role of Law in Deep Fake Technology’ (2025) 31(5) Computer and Telecommunications Law Review 147
- Prainsack B. and Forgó N., ‘New AI regulation in the EU seeks to reduce risk without assessing public benefit’ (2024) 30 Nature Medicine 1235
- Ramluckan T., ‘Deepfakes: The Legal Implications’ https://pdfs.semanticscholar.org/6f93/5d299c7f4f76fad19c5be3c2219e4bef921c.pdf
- Save the Children, ‘One in 5 young people in Spain report being victims of AI deepfakes with almost all reporting sexual violence online’ (9 July 2025) https://www.savethechildren.net/news/one-5-young-people-spain-report-being-victims-ai-deepfakes-almost-all-reporting-sexual
- Schmidt F., Varese F., Larkin A. and Bucci S., ‘The Mental Health and Social Implications of Nonconsensual Sharing of Intimate Images on Youth: A Systematic Review’ (2024) 25(3) Trauma, Violence, & Abuse 2158
- SecurityHero.io, ‘2023 State of Deepfakes: Realities, Threats, and Impact’ (2023) https://www.securityhero.io/state-of-deepfakes
- Telegram, ‘FAQ,’ accessed March 30, 2026, https://perma.cc/TPC9-SFW7
- Turing A. M., ‘Computing Machinery and Intelligence’ (1950) 59 Mind 433
- United Nations, Guiding Principles on Business and Human Rights (2011) https://perma.cc/5R6H-6HPN
- Vladislav S. and Eva S., ‘How to Stay Anonymous on Telegram: Expert Tips and Tools for Enhanced Security’ (Pixelscan, 30 April 2025) https://perma.cc/R9FB-V82E
- Wang J., Regulation of Digital Media Platforms: The Case of China (2020) https://perma.cc/HZL6-4LF8
- Zamfir I. and Murphy C., ‘Cyberviolence against women in the EU,’ European Parliament Research Service, Briefing, December 2024, https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/767146/EPRS_BRI(2024)767146_EN.pdf
[1] Under Spanish law the offence is classified as ‘Pornografía Infantil’, sometimes translated as ‘Juvenile Pornography’; however, for clarity, this article refers to it as Child Sexual Abuse Material (CSAM).
[2] Juzgado de Menores de Badajoz, Sentencia 86/2024, 20 June 2024 (ECLI:ES:JMEBA:2024:4).
[3] A.M. Turing, ‘Computing Machinery and Intelligence’, Mind, 59, 433-460, (1950); See also, Cristiano Castelfranchi, ‘Alan Turing’s “Computing Machinery and Intelligence”’ (2013) 32 Topoi 293.
[4] Douglas Harris, ‘Deepfakes: False Pornography Is Here, and the Law Cannot Protect You’ (2019) 17 Duke Law & Technology Review 99-127.
[5] For the purposes of this analysis, accountability will refer to platforms obligations under EU law and potential exposure to civil liability rather than broader corporate social responsibility expectations.
[6] Foreseeability has always been a crucial element of the law of torts and its test can be summarised as “whether one can see a systematic relationship between the type of accident that the plaintiff suffered and … the defendant’s [wrongdoing].” See Meiring de Villiers, ‘Foreseeability Decoded,’ (2015) 16 Minnesota Journal of Law Science & Technology 343, 344. For present purposes, foreseeability refers to the predictable risks arising from the misuse of AI image-generation systems.
[7] As noted by Prainsack & Forgó, the rapid pace of technological development means that legal risk assessments are often “outdated by the time their legal consequences emerge,” making it difficult for static regulations like the AI Act to capture continuously evolving risks. See Barbara Prainsack & Nikolaus Forgó, ‘New AI regulation in the EU seeks to reduce risk without assessing public benefit’ (2024) 30 Nature Medicine 1235.
[8] See Costantino Grasso & Stephen Holden, ‘Exploring the Interconnections Between Corporate Social Responsibility and Corporate Crime,’ in Bianchin M. and Palmiter A. (eds.) The Emerging Law of Sustainable Corporations: Chronicles from a Course, a Colloquium, and a Symposium, Padova University Press, (2024) 235, 241, available at https://www.padovauniversitypress.it/system/files/download-count/attachments/2025-03/9788869383335.pdf.
[9] See Europol, Facing reality? Law enforcement and the challenge of deepfakes (Observatory Report, Europol Innovation Lab 2022) at p. 11 https://perma.cc/NE2R-LZ3F.
[10] For a discussion of how this transition occurred in the UK in relation to corruption see Costantino Grasso, ‘Peaks and troughs of the English deferred prosecution agreement: the lesson learned from the DPA between the SFO and ICBC SB Plc’ (2016) (5) Journal of Business Law 388.
[11] Tochukwu Onyiuke, ‘Controlling Realities: The Role of Law in Deep Fake Technology’ (2025) 31(5) Computer and Telecommunications Law Review 147.
[12] Children’s Commissioner for England, “One day this could happen to me”: Children, nudification tools and sexually explicit deepfakes (April 2025) 27–28.
[13] Ionel Zamfir and Colin Murphy, ‘Cyberviolence against women in the EU,’ European Parliament Research Service, Briefing, December 2024, 4, https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/767146/EPRS_BRI(2024)767146_EN.pdf.
[14] David Ireoluwatomi Oladipupo, ‘Shadows of the Digital Age Protecting Vulnerable Individuals Online,’ The Corporate Social Responsibility and Business Ethics Blog (Nov 16, 2024), 4.
[15] Internet Watch Foundation, What has changed in the AI CSAM landscape? (AI CSAM Report Update, July 2024) 15.
[16] Children’s Commissioner for England (n 12) 32–36.
[17] SecurityHero.io, ‘2023 State of Deepfakes: Realities, Threats, and Impact’ (SecurityHero.io, 2023) https://www.securityhero.io/state-of-deepfakes.
[18] See Will Hawkins, Brent Mittelstadt, and Chris Russell, ‘Deepfakes on Demand: The rise of accessible non-consensual deepfake image generators,’ In The 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25), June 23–26, 2025, Athens, Greece. ACM, New York, NY, USA, 1602, available at https://dl.acm.org/doi/epdf/10.1145/3715275.3732107.
[19] Internet Watch Foundation, What has changed in the AI CSAM landscape? (AI CSAM Report Update, July 2024) 17–18.
[20] Save the Children International, ‘One in 5 Young People in Spain Report Being Victims of AI Deepfakes With Almost All Reporting Sexual Violence Online – Save the Children Study’ https://www.savethechildren.net/news/one-5-young-people-spain-report-being-victims-ai-deepfakes-almost-all-reporting-sexual.
[21] The European Commission, ‘Commission Investigates Grok and X’s Recommender Systems under the Digital Services Act’ (6 February 2026) https://digital-strategy.ec.europa.eu/en/news/commission-investigates-grok-and-xs-recommender-systems-under-digital-services-act.
[22] Alliance for Universal Digital Rights and Equality Now, ‘Briefing Paper: Deepfake Image-Based Sexual Abuse, Tech-Facilitated Sexual Exploitation and the Law’ (2024) https://audri.org/wp-content/uploads/2024/01/EN-AUDRi-Briefing-paper-deepfake-06.pdf.
[23] European Parliament, ‘EU AI Act: First regulation on artificial intelligence’ (European Parliament, 2023) https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
[24] Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) art 2(12).
[25] See Barbara Prainsack & Nikolaus Forgó, ‘New AI regulation in the EU seeks to reduce risk without assessing public benefit’ (2024) 30 Nature Medicine 1235.
[26] European Commission, AI Liability Directive https://www.ai-liability-directive.com.
[27] Artificial Intelligence Act, art 5.
[28] Artificial Intelligence Act, art 50.
[29] Juzgado de Menores de Badajoz (n 1).
[30] European Parliament, ‘EU Digital Markets Act and Digital Services Act explained’ (European Parliament, 14 December 2021) https://www.europarl.europa.eu/topics/en/article/20211209STO19124/eu-digital-markets-act-and-digital-services-act-explained.
[31] Digital Services Act, art 6.
[32] Digital Services Act, art 16.
[33] Digital Services Act, arts 34-35.
[34] European Commission, ‘DSA: Very large online platforms and search engines’ https://digital-strategy.ec.europa.eu/en/policies/dsa-vlops.
[35] European Parliament, ‘The case of Telegram and the methodology for determining VLOPs under the DSA’ (Question for written answer E-001293/2025, 27 March 2025) https://www.europarl.europa.eu/doceo/document/E-10-2025-001293_EN.html.
[36] European Data Protection Board, Guidelines 3/2019 on processing of personal data through video devices (Version 2.0, 29 January 2020) https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_201903_video_devices.pdf.
[37] Art 9, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons regarding the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1.
[38] GDPR, art 6.
[39] GDPR, art 8.
[40] In 2025 a case was brought in the United States against an app offering the same services under an almost identical name. It has not been possible to verify whether it was the same app used in the Almendralejo case, so this article treats them as two different apps. The US complaint alleged that the app lacked any age-verification mechanism. See Jane Doe (a minor) v AI/Robotics Venture Strategy 3 Ltd (d/b/a ClothOff) and others, Complaint, US District Court, District of New Jersey, No 2:25-cv-16671 (filed 16 October 2025).
[41] Directive 2011/93/EU of the European Parliament and of the Council of 13 December 2011 on combating the sexual abuse and sexual exploitation of children and child pornography [2011] OJ L335/1, art 2.
[42] Directive 2011/93/EU [2011] OJ L335/1, art 5.
[43] Charter of Fundamental Rights of the European Union [2016] OJ C202/389, arts 1, 7, 8 and 24.
[44] Telegram, ‘FAQ — What makes Telegram groups cool?’ https://perma.cc/TPC9-SFW7 accessed 30 March 2026.
[45] Ibid.
[46] M La Morgia and others, ‘Uncovering the Dark Side of Telegram: Fakes, Clones, Scams and More’ (arXiv preprint, 2021).
[47] Mirko Franco, Ombretta Gaggi and Claudio E Palazzi, ‘Characterizing Non-Consensual Intimate Image Abuse on Telegram Groups and Channels’ (2024) 4th International Workshop on Open Challenges in Online Social Networks (OASIS ’24), Poznan, Poland.
[48] Vladislav S and Eva S, ‘How to Stay Anonymous on Telegram: Expert Tips and Tools for Enhanced Security’ (Pixelscan, 30 April 2025) https://perma.cc/R9FB-V82E.
[49] Mirko Franco, Ombretta Gaggi and Claudio E Palazzi (n 47).
[50] M La Morgia and others (n 46).
[51] Antoinette R Huber and Zara Ward, ‘Non-consensual Intimate Image Distribution: Nature, Removal, and Implications for the Online Safety Act’ (2024) European Journal of Criminology.
[52] Carmela Mento and others, ‘Psychological Violence in Image-Based Sexual Abuse (IBSA): The Role of Psychological Traits and Social Communications—A Narrative Review’ (2025) Healthcare; Felipa Schmidt, Filippo Varese, Amanda Larkin and Sandra Bucci, ‘The Mental Health and Social Implications of Nonconsensual Sharing of Intimate Images on Youth: A Systematic Review’ (2024) 25(3) Trauma, Violence, & Abuse 2158.
[53] Digital Services Act, recitals 3 and 47.
[54] Digital Services Act, art 8.
[55] Artificial Intelligence Act, recital 102.
[56] European Commission, Commission Investigates Grok and X’s Recommender Systems under the Digital Services Act (Press Release IP/26/203, 25 January 2026) https://perma.cc/V5QQ-4FJE.
[57] By shifting the focus from individual behaviour to the ‘platforms that host such material’, and criminalising the very supply of ‘nudification’ tools, the UK response signals a global departure from the failed model of corporate self-policing. See Laura Cress, ‘X could “lose right to self regulate”, says Starmer’ BBC News (12 January 2026) https://www.bbc.co.uk/news/articles/cq845glnvl1o accessed 27 March 2026.
[58] See Yannic Duller, ‘Who Governs AI in the EU? A Breakdown of Authorities in the EU AI Act’ (UNESCO, 22 December 2025) https://perma.cc/WDX2-QTT3.
[59] Marta Cantero Gamito and Christopher T Marsden, ‘Artificial intelligence co-regulation? The role of standards in the EU AI Act’ (2024) International Journal of Law and Information Technology.
[60] Jon Chun, Christian Schroeder de Witt and Katherine Elkins, ‘Comparative Global AI Regulation: Policy Perspectives from the EU, China and the US’ (2024) 47 Fordham International Law Journal 1.
[61] Patrick Brione & Devyani Gajjar, ‘Artificial intelligence: ethics, governance and regulation’ (7 October 2024) (UK Parliament briefing) https://perma.cc/FK3D-4YG8.
[62] Jufang Wang, Regulation of Digital Media Platforms: The Case of China (2020), https://perma.cc/HZL6-4LF8.
[63] Maria Girich, Antonina Levashenko, A Valamat-Zade and Olga Magomedova, ‘Trends in Regulating Online Platforms Worldwide: International Experience’ (2021) Russian Economy in 2020. Trends and Outlooks, Issue 42.
[64] Sally Broughton Micova, What is the Harm in Size: Very Large Online Platforms in the Digital Services Act (CERRE Issue Paper, 19 October 2021) https://ueaeprints.uea.ac.uk/id/eprint/83031/1/211019_CERRE_IP_What_is_the_harm_in_size_FINAL.pdf.
[65] UN Guiding Principles on Business and Human Rights, 2011, https://perma.cc/5R6H-6HPN.
[66] House of Commons Women and Equalities Committee, Tackling Non-Consensual Intimate Image Abuse (Fourth Report of Session 2024–25, HC 336) https://committees.parliament.uk/publications/46899/documents/241995/default.
[67] Ofcom, ‘Investigation into X Internet Unlimited Company and its compliance with duties to protect its users from illegal content and child users from harmful content’ (12 January 2026) https://perma.cc/7RV6-T38B.
Disclaimer:
The views, opinions, and positions expressed within all posts are those of the author(s) alone and do not represent those of the Corporate Social Responsibility and Business Ethics Blog or its editors. The blog makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions, or representations. The copyright of this content belongs to the author(s), and any liability concerning the infringement of intellectual property rights remains with the author(s).
#Accountability #AI #AIAct #AIGeneratedContent #AILiability #Almendralejo #AnticipatoryGovernance #ArtificialIntelligence #BusinessEthics #ChildProtection #Children #Corporate #CorporateAccountability #CorporateGovernance #CorporateLiability #CorporateSocialResponsibility #Corporations #CSAM #CSR #CSRBlog #CyberViolence #DataProtection #Deepfakes #DigitalRights #DigitalServicesAct #DSA #Ethics #EU #EuropeanUnion #GDPR #GenerativeAI #HumanRights #InformationTechnology #Justice #LegalGap #MachineLearning #Nudification #OnlineSafety #Privacy #Regulation #Telegram #VLOP #X #Sexual #SexualAbuse #SocialMedia #DigitalMedia