Author: Christoph Schmon

  • EU’s New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights

    The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.

    The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect, suffering from uneven enforcement, complexity, and real administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, would significantly weaken privacy protections for users in the name of cutting red tape.

    It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted. The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy.

    Let’s break it down. 

     Changing What Constitutes Personal Data 

    The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and the new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, even though the laws now being targeted were themselves once promoted as drivers of innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and other tech lobbying groups.

     The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.  

     The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue. 

    This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring that separates identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.
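    To see how this restructuring can amount to paperwork rather than real protection, here is a minimal sketch (all names hypothetical) of keyed pseudonymization: the direct identifier is replaced with a hash, but anyone holding the key can re-identify every record just as easily as before.

    ```typescript
    import { createHmac } from "node:crypto";

    // Hypothetical record shape, for illustration only.
    interface UserRecord {
      email: string;
      browsingHistory: string[];
    }

    const LINK_KEY = "secret-held-by-the-company"; // the company retains this key

    // Replace the direct identifier with a keyed pseudonym. On paper, the
    // resulting dataset might contain no "personal data" under the proposed test.
    function pseudonymize(record: UserRecord) {
      const pseudonym = createHmac("sha256", LINK_KEY)
        .update(record.email)
        .digest("hex");
      return { pseudonym, browsingHistory: record.browsingHistory };
    }

    // But re-identification is trivial for anyone holding LINK_KEY: hash a
    // known email and match it against stored pseudonyms. Identifiability
    // hasn't changed -- only which entity is said to be able to identify.
    function reIdentify(knownEmail: string, storedPseudonym: string): boolean {
      const candidate = createHmac("sha256", LINK_KEY)
        .update(knownEmail)
        .digest("hex");
      return candidate === storedPseudonym;
    }
    ```

    Under the proposed entity-specific test, the pseudonymized dataset could fall outside the GDPR for a partner that supposedly lacks LINK_KEY, even though re-linking remains a single function call away.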

    Privileging AI 

    In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” giving AI companies a broad legal basis to process personal data unless individuals actively object. The proposals gesture toward organizational and technical safeguards but leave companies broad discretion.

    Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be made to remove them from AI models or training sets where they appear. However, it is unclear what will count as appropriate or proportionate measures.

    Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.  

    And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.

    A Broad Reform Beyond the GDPR

    There are additional adjustments, many of them troubling, such as changes to the rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation of how users’ data are used), and revised data access rights (supposed to tackle abusive requests). An extensive analysis by the NGO noyb can be found here.

    Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.

    Browser Signals and Cookie Fatigue

    There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights. 
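    The proposal leaves the signal’s exact format to standards bodies, but an existing precedent, the Global Privacy Control (GPC) header, shows the basic mechanics: the browser attaches a signal to every request, and the site honors it instead of showing a banner. Here is a minimal sketch under that assumption (the EU signal’s final name and semantics are not yet defined; renderPage is a hypothetical helper):

    ```typescript
    // Sketch of a server honoring a browser-level consent signal, modeled on
    // the existing Global Privacy Control header ("Sec-GPC: 1"). The EU
    // signal's final name and semantics will be set by standards bodies.
    import { createServer } from "node:http";

    createServer((req, res) => {
      // GPC-enabled browsers send "Sec-GPC: 1" with every request.
      const optedOut = req.headers["sec-gpc"] === "1";
      if (optedOut) {
        // Honor the signal: no tracking, and no consent banner needed.
        res.end(renderPage({ tracking: false, showBanner: false }));
      } else {
        // No signal received: fall back to an explicit consent prompt.
        res.end(renderPage({ tracking: false, showBanner: true }));
      }
    }).listen(8080);

    // Hypothetical stand-in for real page rendering.
    function renderPage(opts: { tracking: boolean; showBanner: boolean }): string {
      return `<html><!-- banner shown: ${opts.showBanner} --></html>`;
    }
    ```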

    While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later. 

    Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on.

    However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps. 

    Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected. 

    A Muddled Legal Landscape

    The Commission’s use of the “Omnibus” process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.

    The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived. This sounds like “complification” rather than simplification.

    The Digital Package Is Not a Done Deal

    Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection.

    Simplification doesn’t require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into Verschlimmbesserung, the kind of well-meant fix that ends up resembling the infamous Ecce Homo restoration.

  • After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know

    After a years-long battle, the Council of the EU, representing EU member states, has at last agreed on a position on the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures. The good news is that the most controversial part, the forced requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.

    Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights while the Council of the EU spent many months pursuing an intrusive, law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. This requirement would fundamentally break end-to-end encryption.

    Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council’s agreed position removes the requirement that would force providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.

    But here comes the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., which has no comprehensive federal privacy law, voluntary scanning is not normally legal in the EU; it has been possible only through a derogation set to expire in 2026. It is unclear how this will play out over time, though we are concerned that this approach will lead to private mass-scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection.
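    In practice, “voluntary detection” on a non-encrypted service typically means server-side matching of content against a list of known illicit material. A minimal sketch of that flow (hypothetical names; real deployments use perceptual hashes such as PhotoDNA so that near-duplicates also match, rather than the exact hash used here):

    ```typescript
    import { createHash } from "node:crypto";

    // Hypothetical list of fingerprints of known illicit material, as
    // supplied to providers by a designated authority.
    const knownHashes = new Set<string>();

    // A provider can only scan what it can read: this works precisely
    // because the service is NOT end-to-end encrypted. SHA-256 is a
    // stand-in; real systems use perceptual hashing (e.g. PhotoDNA).
    function matchesKnownMaterial(content: Buffer): boolean {
      const digest = createHash("sha256").update(content).digest("hex");
      return knownHashes.has(digest);
    }

    // Every message on the service passes through the scanner -- the
    // private mass-scanning of non-encrypted services this post warns about.
    function handleMessage(content: Buffer): void {
      if (matchesKnownMaterial(content)) {
        // Report or block per provider policy, with little external oversight.
      }
      // ...then deliver the message as usual.
    }
    ```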

    With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services. This includes age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on age verification to reduce online harms.

    If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be freely available to everyone, everywhere, of all ages, without any requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline.

    Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we’re concerned about how this will play out in practice.

    For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what “allowing encryption” means and then expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.
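    A conceptual sketch (all names and stub implementations hypothetical) makes the incompatibility concrete: client-side scanning runs on the plaintext before encryption, so the confidentiality promise of end-to-end encryption is bypassed no matter how strong the cipher is.

    ```typescript
    // Conceptual sketch: why client-side scanning defeats end-to-end
    // encryption. All names and stub implementations are hypothetical.

    // Stand-in for a real E2EE step (e.g. the Signal protocol).
    function encryptForRecipient(plaintext: string): Uint8Array {
      return new TextEncoder().encode(plaintext); // placeholder, not real crypto
    }

    // Stand-in for a hash- or classifier-based scanner over a target list.
    function matchesTargetList(plaintext: string): boolean {
      return plaintext.includes("target-pattern"); // placeholder
    }

    function reportToAuthority(plaintext: string): void {
      // hypothetical reporting channel
    }

    function deliver(ciphertext: Uint8Array): void {
      // hypothetical transport
    }

    function sendMessage(plaintext: string): void {
      // The scan necessarily runs on the plaintext, BEFORE encryption...
      if (matchesTargetList(plaintext)) {
        // ...so a match discloses content that encryption was supposed to
        // make readable only to the intended recipient.
        reportToAuthority(plaintext);
      }
      // The cipher below is untouched, but the end-to-end guarantee is not:
      // a third party inspected the message before it was ever encrypted.
      deliver(encryptForRecipient(plaintext));
    }
    ```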

    As we approach the final “trilogue” negotiations of this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk benchmark systems that lead to surveillance in practice.

  • Saving the Internet in Europe: Fostering Choice, Competition and the Right to Innovate

    This post is the fourth and final part in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here.

    EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.   

    In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and discuss how what happens in Europe can affect digital rights across the globe.  

    EFF’s Approach to Competition  

    Market concentration and monopoly power among internet companies and internet access providers affect many of EFF’s issues, particularly innovation, consumer privacy, net neutrality, and platform censorship. And we have said it many times: antitrust law and rules on market fairness are powerful tools that can either further cement established giants’ hold over a market or challenge incumbents and spur innovation and choice that benefit users. Antitrust enforcement must hit monopolists where it hurts, ensuring that anti-competitive behaviors like abuse of dominance by multi-billion-dollar tech giants come at a price high enough to force real change.

    The EU has recently shown that it is serious about cracking down on Big Tech companies with its full arsenal of antitrust rules. For example, in a high-stakes appeal in 2022, EU judges upheld a record fine of €4.125 billion against Google for abusing its dominant position by locking Android users into its search engine (now pending before the Court of Justice).

    We believe that with the right dials and knobs, clever competition rules can complement antitrust enforcement and ensure that firms that grow top-heavy and sluggish are displaced by nimbler new competitors. Good competition rules should enable better alternatives that protect users’ privacy and enhance users’ technological self-determination. In the EU, this requires not only proper enforcement of existing rules but also new regulation that tackles gatekeepers’ dominance before harm is done.

    The Digital Markets Act  

    The DMA will probably turn out to be one of the most impactful pieces of EU tech legislation in history. It’s complex, but the overall approach is to place new requirements and restrictions on online “gatekeepers”: the largest tech platforms, which control access to digital markets for other businesses. These requirements are designed to break down the barriers businesses face in competing with the tech giants.

    Let’s break down some of the DMA’s rules. If enforced robustly, the DMA will make it easier for users to switch services, install third-party apps and app stores, and exercise more power over default settings on their mobile computing devices. Users will no longer be steered into sticking with the defaults embedded in their devices and can choose, for example, their own default browser on Apple’s iOS. The DMA also tackles data collection practices: gatekeepers can no longer cross-combine user data or sign users into new services without their explicit consent, and must provide them with a specific choice. A “pay or consent” advertising model as proposed by Meta will probably not cut it.

    There are also new data access and sharing requirements that could benefit users, such as the right of end users to request effective portability of their data and to get access to tools that make it possible. One section of the DMA even requires gatekeepers to make their person-to-person messaging systems (like WhatsApp) interoperable with competitors’ systems on request—a globally unique ex ante obligation in competition regulation. At EFF, we believe that interoperable platforms can be a driver for technological self-determination and a more open internet. But even though data portability and interoperability are anti-monopoly medicine, they come with challenges: ported data can contain sensitive information about you, and interoperability poses difficult questions about security and governance, especially when it’s mandated for encrypted messaging services. Ideally, the DMA should be implemented to offer better protections for users’ privacy and security, new features, new ways of communication, and better terms of service.

    There are many more do’s and don’ts in the EU’s new fairness rulebook, such as the prohibition on platforms favoring their own products and services over those of rivals in ranking, crawling, and indexing (ensuring users a real choice!). All these requirements are meant to create more fairness and contestability in digital markets—a laudable objective. If done right, the DMA presents an option for real change for technology users—and a real threat to current abusive or unfair industry practices by Big Tech. But if implemented poorly, it could create more legal uncertainty, restrict free expression, or even legitimize the status quo. It is now up to the European Commission to bring the DMA’s promises to life.

    Public Interest 

    As the EU’s 2024–2029 mandate is now in full swing, it will be important not to lose sight of the big picture. Fairness rules can only be truly fair if they follow a public-interest approach, empowering users, businesses, and society more broadly, and making it easier for users to control the technology they rely on. And we cannot stop here: the EU must strive to foster a public-interest internet and support open-source and decentralized alternatives. Competition and innovation are interconnected forces, and the recent rise of the Fediverse makes this clear. Platforms like Mastodon and Bluesky thrive by filling gaps (and addressing frustrations) left by corporate giants, offering users more control over their experience and ultimately strengthening the resilience of the open internet. The EU should generally support user-controlled alternatives to Big Tech and use smart legislation to foster interoperability for services like social networks. In an ideal world, users are no longer locked into dominant platforms and the ad-tech industry—responsible for pervasive surveillance and other harms—is brought under control.

    What we don’t want is a European Union that conflates fairness with protectionist industrial policies or reacts to geopolitical tensions with measures that could backfire on digital openness and fair markets. The enforcement of the DMA and new EU competition and digital rights policies must remain focused on prioritizing user rights and ensuring compliance from Big Tech—not tolerating malicious (non)compliance tactics—and upholding the rule of law rather than politicized interventions. The EU should avoid policies that could lead to a fragmented internet and must remain committed to net neutrality. It should also not hesitate to counter the concentration of power in the emerging AI stack market, where control over infrastructure and technology is increasingly in the hands of a few dominant players. 

    EFF will be watching. And we will continue to fight to save the internet in Europe, ensuring that fairness in digital markets remains rooted in choice, competition, and the right to innovate. 

  • A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

    The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

    As the new mandate of the European institutions begins – a period in which newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and successes in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world.

    Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

    • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
    • The EU must create conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies.
    • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users. 
    • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

    Read on for our full set of recommendations.

  • EFF and Partners to EU Commissioner: Prioritize User Rights, Avoid Politicized Enforcement of DSA Rules

    EFF, Access Now, and Article 19 have written to EU Commissioner for Internal Market Thierry Breton calling on him to clarify his understanding of “systemic risks” under the Digital Services Act, and to set a high standard for the protection of fundamental rights, including freedom of expression and of information. The letter was in response to Breton’s own letter addressed to X, in which he urged the platform to take action to ensure compliance with the DSA in the context of far-right riots in the UK as well as the conversation between US presidential candidate Donald Trump and X CEO Elon Musk, which was scheduled to be, and was in fact, live-streamed hours after his letter was posted on X. 

    Clarification is necessary because Breton’s letter otherwise reads as a serious overreach of EU authority, and transforms the systemic risks-based approach into a generalized tool for censoring disfavored speech around the world. By specifically referencing the streaming event between Trump and Musk on X, Breton’s letter undermines one of the core principles of the DSA: to ensure fundamental rights protections, including freedom of expression and of information, a principle noted in Breton’s letter itself.

    The DSA Must Not Become A Tool For Global Censorship

    The letter plays into some of the worst fears of critics of the DSA that it would be used by EU regulators as a global censorship tool rather than addressing societal risks in the EU. 

    The DSA requires very large online platforms (VLOPs) to assess the systemic risks that stem from “the functioning and use made of their services in the [European] Union.” VLOPs are then also required to adopt “reasonable, proportionate and effective mitigation measures,” “tailored to the systemic risks identified.” The emphasis on systemic risks was intended, at least in part, to alleviate concerns that the DSA would be used to address individual incidents of dissemination of legal, but concerning, online speech. It was one of the limitations that civil society groups concerned with preserving a free and open internet worked hard to incorporate.

    Breton’s letter troublingly states that he is currently monitoring “debates and interviews in the context of elections” for the “potential risks” they may pose in the EU. But such debates and interviews with electoral candidates, including the Trump-Musk interview, are clearly matters of public concern—the types of publication that are deserving of the highest levels of protection under the law. Even if one has concerns about a specific event, dissemination of information that is highly newsworthy, timely, and relevant to public discourse is not in itself a systemic risk.

    People seeking information online about elections have a protected right to view it, even through VLOPs. The dissemination of this content should not be within the EU’s enforcement focus under the threat of non-compliance procedures, and risks associated with such events should be analyzed with care. Yet Breton’s letter asserts that such publications are actually under EU scrutiny. And it is entirely unclear what proactive measures a VLOP should take to address a future speech event without resorting to general monitoring and disproportionate content restrictions. 

    Moreover, Breton’s letter fails to distinguish between “illegal” and “harmful content” and implies that the Commission favors content-specific restrictions of lawful speech. The European Commission has itself recognized that “harmful content should not be treated in the same way as illegal content.” Breton’s tweet that accompanies his letter refers to the “risk of amplification of potentially harmful content.” His letter seems to use the terms interchangeably. Importantly, this is not just a matter of differences in the legal protections for speech between the EU, the UK, the US, and other legal systems. The distinction, and the protection for legal but harmful speech, is a well-established global freedom of expression principle. 

    Lastly, we are concerned that the Commission is reaching beyond its geographic mandate. It is not clear how events that occur outside the EU are linked to risks and societal harm to people who live and reside within the EU, nor what actions the Commission expects VLOPs to take to address those risks. The letter itself admits that the assessment is still in process, and the harm merely a possibility. EFF and partners within the DSA Human Rights Alliance have long advocated for human rights-centered enforcement of the DSA that also considers the DSA’s global effects. It is time for the Commission to prioritize its enforcement actions accordingly.

    Read the full letter here.
