Author: Adam Schwartz

  • Strengthen Colorado’s AI Act | Electronic Frontier Foundation

    Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

    Fortunately, workers, patients, and renters are resisting.

    In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

    EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

    What the Colorado AI Act Does

    The Colorado AI Act is a good step in the right direction. It regulates “high-risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, health care, housing, insurance, or legal services. An AI system is a “substantial factor” in those decisions if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.

    Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

    Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs. And the deployer must provide them an opportunity to appeal, including with human review.

    Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

    How the Colorado AI Act Should Be Strengthened

    The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

    Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas have similar biometric privacy rights on paper, but in practice the people of Illinois benefit far more from those rights because they can sue violators.

    Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they recurringly have an unfair disparate impact against vulnerable groups, and so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, then they get a rebuttable presumption that they complied with the Act’s requirement of “reasonable care” to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

    Next Steps

    Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

    EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

  • Privacy Harm Is Harm | Electronic Frontier Foundation

    Every day, corporations track our movements through license plate scanners, building detailed profiles of where we go, when we go there, and who we visit. When they do this to us in violation of data privacy laws, we’ve suffered a real harm—period. We shouldn’t need to prove we’ve suffered additional damage, such as physical injury or monetary loss, to have our day in court.

    That’s why EFF is proud to join an amicus brief in Mata v. Digital Recognition Network, a lawsuit by drivers against a corporation that allegedly violated a California statute that regulates Automatic License Plate Readers (ALPRs). The state trial court erroneously dismissed the case, by misinterpreting this data privacy law to require proof of extra harm beyond privacy harm. The brief was written by the ACLU of Northern California, Stanford’s Juelsgaard Clinic, and UC Law SF’s Center for Constitutional Democracy.

    The amicus brief explains:

    This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety.

    California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms. A foundation for these protections is article I, section 1, which grants Californians an inalienable right to privacy.

    People in the state have long used this constitutional right to challenge the privacy-invading collection of information by private and governmental parties, not only harms that are financial, mental, or physical. Indeed, widely understood notions of privacy harm, as well as references to harm in the California Code, also demonstrate that term’s expansive meaning.

    What’s At Stake

    The defendant, Digital Recognition Network, also known as DRN Data, is a subsidiary of Motorola Solutions that provides access to a massive searchable database of ALPR data collected by private contractors. Its customers include law enforcement agencies and private companies, such as insurers, lenders, and repossession firms. DRN is the sister company to the infamous surveillance vendor Vigilant Solutions (now Motorola Solutions), and together they have provided data to ICE through a contract with Thomson Reuters.

    The consequences of weak privacy protections are already playing out across the country. This year alone, authorities in multiple states have used license plate readers to hunt for people seeking reproductive healthcare. Police officers have used these systems to stalk romantic partners and monitor political activists. ICE has tapped into these networks to track down immigrants and their families for deportation.

    Strong Privacy Laws

    This case could determine whether privacy laws have real teeth or are just words on paper. If corporations can collect your personal information with impunity—knowing that unless you can prove bodily injury or economic loss, you can’t fight back—then privacy laws lose value.

    We need strong data privacy laws. We need a private right of action so when a company violates our data privacy rights, we can sue them. We need a broad definition of “harm,” so we can sue over our lost privacy rights, without having to prove collateral injury. EFF wages this battle when writing privacy laws, when interpreting those laws, and when asserting “standing” in federal and state courts.

    The fight for privacy isn’t just about legal technicalities. It’s about preserving your right to move through the world without being constantly tracked, catalogued, and profiled by corporations looking to profit from your personal information.

    You can read the amicus brief here.

  • Yes to California’s “No Robo Bosses Act”

    California’s Governor should sign S.B. 7, a common-sense bill to end some of the harshest consequences of automated abuse at work. EFF is proud to join dozens of labor, digital rights, and other advocates in support of the “No Robo Bosses Act.”

    Algorithmic decision-making is a growing threat to workers. Bosses are using AI to assess the body language and voice tone of job candidates. They’re using algorithms to predict when employees are organizing a union or planning to quit. They’re automating choices about who gets fired. And these employment algorithms often discriminate based on gender, race, and other protected statuses. Fortunately, many advocates are resisting.

    What the Bill Does

    S.B. 7 is a strong step in the right direction. It addresses “automated decision systems” (ADS) across the full landscape of employment. It applies to bosses in the private and government sectors, and it protects workers who are employees and contractors. It reaches all manner of employment decisions that involve automated decision-making, including hiring, wages, hours, duties, promotion, discipline, and termination. It covers bosses using ADS to assist or replace a person making a decision about another person.

    The bill requires employers to be transparent when they rely on ADS. Before using it to make a decision about a job applicant or current worker, a boss must notify them about the use of ADS. The notice must be in a stand-alone, plain language communication. The notice to a current worker must disclose the types of decisions subject to ADS, and a boss cannot use an ADS for an undisclosed purpose. Further, the notice to a current worker must disclose information about how the ADS works, including what information goes in and how it arrives at its decision (such as whether some factors are weighed more heavily than others).

    The bill provides some due process to current workers who face discipline or termination based on the ADS. A boss cannot fire or punish a worker based solely on ADS. Before a boss does so based primarily on ADS, they must ensure a person reviews both the ADS output and other relevant information. A boss must also notify the affected worker of such use of ADS. A boss cannot use customer ratings as the only or primary input for such decisions. And every worker can obtain a copy of the most recent year of their own data that their boss might use as ADS input to punish or fire them.

    Other provisions of the bill will further protect workers. A boss must maintain an updated list of all ADS it currently uses. A boss cannot use ADS to violate the law, to infer whether a worker is a member of a protected class, or to target a worker for exercising their labor and other rights. Further, a boss cannot retaliate against a worker who exercises their rights under this new law. Local laws are not preempted, so our cities and counties are free to enact additional protections.

    Next Steps

    The “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring.

    EFF has long been fighting such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

  • How to Build on Washington’s “My Health, My Data” Act

    In 2023, the State of Washington enacted one of the strongest consumer data privacy laws in recent years: the “my health my data” act (HB 1155). EFF commends the civil rights, data privacy, and reproductive justice advocates who worked to pass this law.

    This post suggests ways for legislators and advocates in other states to build on the Washington law and draft one with even stronger protections. It separately addresses the law’s scope (such as who is protected); its safeguards (such as consent and minimization); and its enforcement (such as a private right of action). While the law only applies to one category of personal data – our health information – its structure could be used to protect all manner of data.

    Scope of Protection

    Authors of every consumer data privacy law must make three decisions about scope: What kind of data is protected? Whose data is protected? And who is regulated?

    The Washington law protects “consumer health data,” defined as information linkable to a consumer that identifies their “physical or mental health status.” This includes all manner of conditions and treatments, such as gender-affirming and reproductive care. While EFF’s ultimate goal is protection of all types of personal information, bills that protect at least some types can be a great start.

    The Washington law protects “consumers,” defined as all natural persons who reside in the state or had their health data collected there. It is best, as here, to protect all people. If a data privacy law protects just some people, that can incentivize a regulated entity to collect even more data, in order to distinguish protected from unprotected people. Notably, Washington’s definition of “consumers” applies only in “an individual or household context,” but not “an employment context”; thus, Washingtonians will need a different health privacy law to protect them from their snooping bosses.

    The Washington law defines a “regulated entity” as “any legal entity” that both: “conducts business” in the state or targets residents for products or services; and “determines the purpose and means” of processing consumer health data. This appears to include many non-profit groups, which is good, because such groups can harmfully process a lot of personal data.

    The law excludes government from regulation, which is not unusual for data privacy bills focused on non-governmental actors. State and local government will likely need to be regulated by another data privacy law.

    Unfortunately, the Washington law also excludes “contracted service providers when processing data on behalf of government.” A data broker or other surveillance-oriented business should not be free from regulation just because it is working for the police.

    Consent or Minimization to Collect or Share Health Data

    The most important part of Washington’s law requires either consent or minimization for a regulated entity to collect or share a consumer’s health data.

    The law has a strong definition of “consent.” It must be “a clear affirmative act that signifies a consumer’s freely given, specific, informed, opt-in, voluntary, and unambiguous agreement.” Consent cannot be obtained with “broad terms of use” or “deceptive design.”

    Absent consent, a regulated entity cannot collect or share a consumer’s health data except as necessary to provide a good or service that the consumer requested. Such rules are often called “data minimization.” Their virtue is that a consumer does not need to do anything to enjoy their statutory privacy rights; the burden is on the regulated entity to process less data.

    As to data “sale,” the Washington law requires enhanced consent (which the law calls “valid authorization”). Sale is the most dangerous form of sharing, because it incentivizes businesses to collect the most possible data in hopes of later selling it. For this reason, some laws flatly ban sale of sensitive data, like the Illinois biometric information privacy act (BIPA).

    For context, there are four ways for a bill or law to configure consent and/or minimization. Some require just consent, like BIPA’s provisions on data collection. Others require just minimization, like the federal “my body my data” bill. Still others require both, like the Massachusetts location data privacy bill. And some require either one or the other. In various times and places, EFF has supported all four configurations. “Either/or” is weakest, because it allows regulated entities to choose whether to minimize or to seek consent – a choice they will make based on their profit and not our privacy.

    Two Protections of Location Data Privacy

    Data brokers harvest our location information and sell it to anyone who will pay, including advertisers, police, and other adversaries. Legislators are stepping forward to address this threat.

    The Washington law does so in two ways. First, the “consumer health data” protected by the consent-or-minimization rule is defined to include “precise location information that could reasonably indicate a consumer’s attempt to acquire or receive health services or supplies.” In turn, “precise location” is defined as within 1,750’ of a person.

    Second, the Washington law bans a “geofence” around an “in-person health care service,” if “used” for one of three forbidden purposes (to track consumers, to collect their data, or to send them messages or ads). A “geofence” is defined as technology that uses GPS or the like “to establish a virtual boundary” of 2,000’ around the perimeter of a physical location.
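
    For readers who want a concrete picture of what such a “virtual boundary” check involves, here is a minimal sketch in Python. The clinic coordinates, function names, and radius handling are hypothetical illustrations, not part of the Washington law or any vendor’s actual product; real geofencing systems run checks like this against streams of device locations at much larger scale.

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_FT = 20_902_231  # mean Earth radius expressed in feet

    def distance_ft(lat1, lon1, lat2, lon2):
        # Great-circle (haversine) distance between two points, in feet.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * EARTH_RADIUS_FT * asin(sqrt(a))

    def inside_geofence(device_lat, device_lon, center_lat, center_lon, radius_ft=2000):
        # True if a reported device location falls within the virtual boundary.
        return distance_ft(device_lat, device_lon, center_lat, center_lon) <= radius_ft

    # Hypothetical clinic at (47.6070, -122.3330) and a device report a few blocks away.
    print(inside_geofence(47.6062, -122.3321, 47.6070, -122.3330))  # True: within 2,000 feet

    The point of the sketch is only that the statute draws its line at the size of the boundary and the purpose of the check, not at any particular technology.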

    These rules are a good start. They are also much better than weaker rules that only apply to the immediate vicinity of sensitive locations. Such rules allow adversaries to use location data to track us as we move towards sensitive locations, observe us enter the small no-data bubble around those locations, and infer what we may have done there. On the other hand, Washington’s rules apply to sizeable areas. Also, its consent-or-minimization rule applies to all locations that could indicate pursuit of health care (not just health facilities). And its geofence rule forbids use of location data to track people.

    Still, the better approach, as in several recent bills, is to simply protect all location data. Protecting just one kind of sensitive location, like houses of worship, will leave out others, like courthouses. More fundamentally, all locations are sensitive, given the risk that others will use our location data to determine where – and with whom – we live, work, and socialize.

    More Data Privacy Protections

    Other safeguards in the Washington law deserve attention from legislators in other states:

    • Regulated entities must publish a privacy policy that discloses, for example, the categories of data collected and shared, and the purposes of collection. Regulated entities must not collect, use, or share additional categories of data, or process them for additional purposes, without consent.
    • Regulated entities must provide consumers the rights to access and delete their data.
    • Regulated entities must restrict data access to just those employees who need it, and maintain industry-standard data security.

    Enforcement

    A law is only as strong as its teeth. The best way to ensure enforcement is to empower people to sue regulated entities that violate their privacy; this is often called a “private right of action.”

    The Washington law provides that its violation is “an unfair or deceptive act” under the state’s separate consumer protection act. That law, in turn, bans unfair or deceptive acts in the conduct of trade or commerce. Upon a violation of the ban, that law provides a civil action to “any person who is injured in [their] business or property,” with the remedies of injunction, actual damages, treble damages up to $25,000, and legal fees and costs. It remains to be seen how Washington’s courts will apply this old civil action to the new “my health my data” act.

    Washington legislators are demonstrating that privacy is important to public policy, but a more explicit claim would be cleaner: a cause of action for invasion of the fundamental human right to data privacy. Sadly, there is a nationwide debate about whether injury to data privacy, by itself, should be enough to go to court, without also proving a more tangible injury like identity theft. The best legislative models ensure full access to the courts in two ways. First, they provide: “A violation of this law regarding an individual’s data constitutes an injury to that individual, and any individual alleging a violation of this law may bring a civil action.” Second, they provide a baseline amount of damages (often called “liquidated” or “statutory” damages), because it is often difficult to prove actual damages arising from a data privacy injury.

    Finally, data privacy laws must protect people from “pay for privacy” schemes, where a business charges a higher price or delivers an inferior product if a consumer exercises their statutory data privacy rights. Such schemes will lead to a society of privacy “haves” and “have nots.”

    The Washington law has two helpful provisions. First, a regulated entity “may not unlawfully discriminate against a consumer for exercising any rights included in this chapter.” Second, there can be no data sale without a “statement” from the regulated entity to the consumer that “the provision of goods or services may not be conditioned on the consumer signing the valid authorization.”

    Some privacy bills contain more-specific language, for example along these lines: “a regulated entity cannot take an adverse action against a consumer (such as refusal to provide a good or service, charging a higher price, or providing a lower quality) because the consumer exercised their data privacy rights, unless the data at issue is essential to the good or service they requested and then only to the extent the data is essential.”

    What About Congress?

    We still desperately need comprehensive federal consumer data privacy law built on “privacy first” principles. In the meantime, states are taking the lead. The very worst thing Congress could do now is preempt states from protecting their residents’ data privacy. Advocates and legislators from across the country, seeking to take up this mantle, would benefit from looking at – and building on – Washington’s “my health my data” law.

  • EFF to Court: Protect Our Health Data from DHS

    The federal government is trying to use Medicaid data to identify and deport immigrants. So EFF and our friends at EPIC and the Protect Democracy Project have filed an amicus brief asking a judge to block this dangerous violation of federal data privacy laws.

    Last month, the AP reported that the U.S. Department of Health and Human Services (HHS) had disclosed to the U.S. Department of Homeland Security (DHS) a vast trove of sensitive data obtained from states about people who receive government-assisted health care. Medicaid is a federal program that funds health insurance for low-income people; it is partially funded and primarily managed by states. Some states, using their own funds, allow enrollment by non-citizens. HHS reportedly disclosed to DHS the Medicaid enrollee data from several of these states, including enrollee names, addresses, immigration status, and claims for health coverage.

    In response, California and 19 other states sued HHS and DHS. The states allege, among other things, that these federal agencies violated (1) the data disclosure limits in the Social Security Act, the Privacy Act, and HIPAA, and (2) the notice-and-comment requirements for rulemaking under the Administrative Procedure Act (APA).

    Our amicus brief argues that (1) disclosure of sensitive Medicaid data causes a severe privacy harm to the enrolled individuals, (2) the APA empowers federal courts to block unlawful disclosure of personal data between federal agencies, and (3) the broader public is harmed by these agencies’ lack of transparency about these radical changes in data governance.

    A new agency agreement, recently reported by the AP, allows Immigration and Customs Enforcement (ICE) to access the personal data of Medicaid enrollees held by HHS’ Centers for Medicare and Medicaid Services (CMS). The agreement states: “ICE will use the CMS data to allow ICE to receive identity and location information on aliens identified by ICE.”

    In the 1970s, in the wake of the Watergate and COINTELPRO scandals, Congress wisely enacted numerous laws to protect our data privacy from government misuse. This includes strict legal limits on disclosure of personal data within an agency, or from one agency to another. EFF sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, and filed an amicus brief in a suit challenging ICE grabbing taxpayer data. We’ve also reported on the U.S. Department of Agriculture’s grab of food stamp data and DHS’s potential grab of postal data. And we’ve written about the dangers of consolidating all government information.

    We have data protection rules for good reason, and these latest data grabs are exactly why.

    You can read our new amicus brief here.

  • No Postal Service Data Sharing to Deport Immigrants

    The law enforcement arm of the U.S. Postal Service (USPS) recently joined a U.S. Department of Homeland Security (DHS) task force geared towards finding and deporting immigrants, according to a report from the Washington Post. Now, immigration officials want two sets of data from the U.S. Postal Inspection Service (USPIS). First, they want access to what the Post describes as the agency’s “broad surveillance systems, including Postal Service online account data, package- and mail-tracking information, credit card data and financial material and IP addresses.” Second, they want “mail covers,” meaning “photographs of the outside of envelopes and packages.”

    Both proposals are alarming. The U.S. mail is a vital, constitutionally established system of communication and commerce that should not be distorted into infrastructure for dragnet surveillance. Immigrants have a human right to data privacy. And new systems of surveilling immigrants will inevitably expand to cover all people living in our country.

    USPS Surveillance Systems

    Mail is a necessary service in our society. Every day, the agency delivers 318 million letters, hosts 7 million visitors to its website, issues 209,000 money orders, and processes 93,000 address changes.

    To obtain these necessary services, we often must provide some of our personal data to the USPS. According to the USPS’ Privacy Policy: “The Postal Service collects personal information from you and from your transactions with us.” It states that this can include “your name, email, mailing and/or business address, phone numbers, or other information that identifies you personally.” If you visit the USPS’s website, they “automatically collect and store” your IP address, the date and time of your visit, the pages you visited, and more. Also: “We occasionally collect data about you from financial entities to perform verification services and from commercial sources.”

    The USPS should not collect, store, disclose, or use our data except as strictly necessary to provide us the services we request. This is often called “data minimization.” Among other things, in the words of a seminal 1973 report from the U.S. government: “There must be a way for an individual to prevent information about him that was obtained for one purpose from being used or made available for other purposes without [their] consent.” Here, the USPS should not divert customer data, collected for the purpose of customer service, to the new purpose of surveilling immigrants.

    The USPS is subject to the federal Privacy Act of 1974, a watershed anti-surveillance statute. As the USPS acknowledges: “the Privacy Act applies when we use your personal information to know who you are and to interact with you.” Among other things, the Act limits how an agency may disclose a person’s records. (Sound familiar? EFF has a Privacy Act lawsuit against DOGE and the Office of Personnel Management.) While the Act only applies to citizens and lawful permanent residents, that will include many people who send mail to or receive mail from other immigrants. If USPS were to assert the “law enforcement” exemption from the Privacy Act’s non-disclosure rule, the agency would need to show (among other things) a written request for “the particular portion desired” of “the record.” It is unclear how dragnet surveillance like that reported by the Washington Post could satisfy this standard.

    USPS Mail Covers

    From 2015 to 2023, according to another report from the Washington Post, the USPS received more than 60,000 requests for “mail cover” information from federal, state, and local law enforcement. Each request could include days or weeks of information about the cover of mail sent to or from a person or address. The USPS approved 97% of these requests, leading to postal inspectors recording the covers of more than 312,000 letters and packages.

    In 2023, a bipartisan group of eight U.S. Senators (led by Sen. Wyden and Sen. Paul) raised the alarm about this mass surveillance program:

    While mail covers do not reveal the contents of correspondence, they can reveal deeply personal information about Americans’ political leanings, religious beliefs, or causes they support. Consequently, surveillance of this information does not just threaten Americans’ privacy, but their First Amendment rights to freely associate with political or religious organizations or peacefully assemble without the government watching.

    The Senators called on the USPIS to “only conduct mail covers when a federal judge has approved this surveillance,” except in emergencies. We agree that, at minimum, a warrant based on probable cause should be required.

    The USPS operates other dragnet surveillance programs. Its Mail Isolation Control and Tracking Program photographs the exterior of all mail, and it has been used for criminal investigations. The USPIS’s Internet Covert Operations Program (iCOP) conducts social media surveillance to identify protest activity. (Sound familiar? EFF has a FOIA lawsuit about iCOP.)

    This is just the latest of many recent attacks on the data privacy of immigrants. Now is the time to restrain USPIS’s dragnet surveillance programs—not to massively expand them to snoop on immigrants. If this scheme goes into effect, it is only a matter of time before such USPIS spying is expanded against other vulnerable groups, such as protesters or people crossing state lines for reproductive or gender affirming health care. And then against everyone.

  • Our Privacy Act Lawsuit Against DOGE and OPM: Why a Judge Let It Move Forward

    Last week, a federal judge rejected the government’s motion to dismiss our Privacy Act lawsuit against the U.S. Office of Personnel Management (OPM) and Elon Musk’s “Department of Government Efficiency” (DOGE). OPM is disclosing to DOGE agents the highly sensitive personal information of tens of millions of federal employees, retirees, and job applicants. This disclosure violates the federal Privacy Act, a watershed law that tightly limits how the federal government can use our personal information.

    We represent two unions of federal employees: the AFGE and the AALJ. Our co-counsel are Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm LLC.

    We’ve already explained why the new ruling is a big deal, but let’s take a deeper dive into the Court’s reasoning.

    Plaintiffs have standing

    A plaintiff must show they have “standing” to bring their claim. Article III of the U.S. Constitution empowers courts to decide “cases” and “controversies.” Courts have long held this requires the plaintiff to show an “injury in fact” that is, among other things, “concrete.” In recent years, two Supreme Court decisions – Spokeo v. Robins (2016) and TransUnion v. Ramirez (2021) – addressed when an “intangible” injury, such as invasion of data privacy, is sufficiently concrete. They ruled that such injury must have “a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts.”

    In our case, the Court held that our clients passed this test: “The complaint alleges concrete harms analogous to intrusion upon seclusion.” That is one of the common law privacy torts, long recognized in U.S. law. According to the Restatement of Torts, it occurs when a person “intrudes” on the “seclusion of another” in a manner “highly offensive to a reasonable person.”

    The Court reasoned that the records at issue here “contain information about the deeply private affairs of the plaintiffs,” including “social security numbers, health history, financial disclosures, and information about family members.” The court also emphasized plaintiffs’ allegation that these records were “disclosed to DOGE agents in a rushed and insecure manner,” including “administrative access, enabling them to alter OPM records and obscure their own access to those records.”

    The Court rejected defendants’ argument that our clients supposedly pled “only that DOGE agents were granted access to OPM’s data system,” and not also that “the DOGE agents in fact used that access to examine OPM records.” As a factual matter, plaintiffs in fact pled that “DOGE agents actually exploited their access to review, possess, and use OPM records.”

    As a legal matter, such use is not required: “Exposure of the plaintiff’s personally identifiable information to unauthorized third parties, without further use or disclosure, is analogous to harm cognizable under the common law right to privacy.” So ruling, the Court observed: “at least four federal courts have found that the plaintiffs before them had made a sufficient showing of concrete injury, as analogous to common law privacy torts, when agencies granted DOGE agents access to repositories of plaintiffs’ personal information.”

    To have standing, a plaintiff must also show that their “injury in fact” is “actual or imminent.” The Court held that our clients passed this test, too. It ruled that plaintiffs adequately alleged an actual injury: “ongoing unauthorized access by the DOGE agents to the plaintiffs’ data.” It also ruled that plaintiffs adequately alleged a separate, imminent injury: OPM’s disclosure to DOGE “has made the OPM data more vulnerable to hacking, identity theft, and other activities that are substantially harmful to the plaintiffs.” The Court emphasized the allegations of “sweeping and uncontrolled access to DOGE agents who were not properly vetted or trained,” as well as the notorious 2015 OPM data breach.

    Finally, the Court held that our clients sufficiently alleged the remaining two elements of standing: that defendants caused plaintiffs’ injuries, and that an injunction would redress them.

    Plaintiffs may proceed on their Privacy Act claims

    The Court held: “The plaintiffs have plausibly alleged violations of two provisions of the Privacy Act: 5 U.S.C. § 552a(b), which prohibits certain disclosures of records, and 5 U.S.C. § 552a(e)(10), which imposes a duty to establish appropriate safeguards and ensure security and confidentiality of records.” The Court cited two other judges who had recently “found a likelihood that plaintiffs will succeed” in their wrongful disclosure claims.

    Reprising their failed standing arguments, the government argued that to plead a violation of the Privacy Act’s no-disclosure rule, our clients must allege “not just transmission to another person but also review of the records by that individual.” Again, the Court rejected this argument for two independent reasons. Factually, “the complaint amply pleads that DOGE agents viewed, possessed, and used the OPM records.” Legally, “the defendants misconstrue the term ‘disclose.’” The Court looked to the OPM’s own regulations, which define the term to include “providing personal review of a record,” and an earlier appellate court opinion, interpreting the term to include “virtually all instances [of] an agency’s unauthorized transmission of a protected record.”

    Next, the government asserted an exception from the Privacy Act’s no-disclosure rule, for disclosure “to those officers and employees of the agency which maintains the record who have a need for the record in the performance of their duties.” The Court observed that our clients disputed this exception on two independent grounds: “both because [the disclosures] were made to DOGE agents who were not officers or employees of OPM and because, even if the DOGE agents were employees of OPM, they did not have a need for those records in the performance of any lawful duty.” On both grounds, the plaintiffs’ allegations sufficed.

    Plaintiffs may seek to enjoin Privacy Act violations

    The Court ruled that our clients may seek injunctive and declaratory relief against the alleged Privacy Act violations, by means of the Administrative Procedure Act (APA), though not the Privacy Act itself. This is a win: What ultimately matters is the availability of relief, not the particular path to that relief.

    As discussed above, plaintiffs have two claims that the government violated the Privacy Act: unlawful disclosures and unlawful cybersecurity failures. Plaintiffs also have an APA claim of agency action “not in accordance with law,” which refers back to these two Privacy Act violations.

    To be subject to APA judicial review, the challenged agency action must be “final.” The Court found finality: “The complaint plausibly alleges that actions by OPM were not representative of its ordinary day-to-day operations but were, in sharp contrast to its normal procedures, illegal, rushed, and dangerous.”

    Another requirement for APA judicial review is the absence of an “other adequate remedy.” The Court interpreted the Privacy Act to not allow the injunction our clients seek, but then ruled: “As a result, the plaintiffs have no adequate recourse under the Privacy Act and may pursue their request for injunctive relief under the APA.” The Court further wrote:

    The defendants’ Kafkaesque argument to the contrary would deprive the plaintiffs of any recourse under the law. They contend that the plaintiffs have no right to any injunctive relief – neither under the Privacy Act nor under the APA. … This argument promptly falls apart under examination.

    Plaintiffs may proceed on two more claims

    The Court allowed our clients to move forward on their two other claims.

    They may proceed on their claim that the government violated the APA by acting in an “arbitrary and capricious” manner. The Court reasoned: “The complaint alleges that OPM rushed the onboarding process, omitted crucial security practices, and thereby placed the security of OPM records at grave risk.”

    Finally, our clients may proceed on their claim that DOGE acted “ultra vires,” meaning outside of its legal power, when it accessed OPM records. The Court reasoned: “The complaint adequately pleads that DOGE Defendants plainly and openly crossed a congressionally drawn line in the sand.”

    Next steps

    Congress passed the Privacy Act following the Watergate and COINTELPRO scandals to restore trust in government and prevent a future President from creating another “enemies list.” Congress found that the federal government’s increasing use of databases full of personal records “greatly magnified the harm to individual privacy,” and so it tightly regulated how agencies may use these databases.

    The ongoing DOGE data grab may be the worst violation of the Privacy Act since its enactment in 1974. So it is great news that a judge has denied the government’s motion to dismiss our lawsuit. Now we will move forward to prove our case.

  • Face Scans to Estimate Our Age: Harmful and Creepy AF

    Government must stop restricting website access with laws requiring age verification.

    Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

    Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

    Error and discrimination

    Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.

    Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.

    Estimating our identity and demographics

    Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.

    Some companies are in both the age estimation business and the face identification business.

    Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

    Estimating our emotions and honesty

    Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

    Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

    When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR errors recurringly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

    Privacy and infosec

    The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

    Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

    Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
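
    As a rough illustration of that “on-device, delete immediately” design, the sketch below shows an age check that returns nothing but a pass/fail bit. The estimator callable is a hypothetical stand-in for a local model, not any real vendor’s API, and, as the paragraph above notes, this design only reduces the risks rather than eliminating them.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AgeCheckResult:
        passed: bool  # the only value that ever leaves the device

    def check_age_locally(frame: bytes,
                          estimator: Callable[[bytes], float],
                          threshold: float = 18.0) -> AgeCheckResult:
        # Run the (hypothetical) local estimator and keep only a pass/fail bit;
        # this function's references to the frame and raw estimate are dropped.
        estimated_age = estimator(frame)
        result = AgeCheckResult(passed=estimated_age >= threshold)
        del frame, estimated_age
        return result

    # Usage with a stand-in estimator; a real deployment would run an on-device model.
    print(check_age_locally(b"raw-camera-bytes", estimator=lambda f: 27.4))

    The design choice being illustrated is simply that the less a checkpoint retains and transmits, the less it can leak or be compelled to disclose later.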

    Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods, whether from users who gave only murky consent or from non-users. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

    Most significant here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forgo the content and conversations available on restricted websites.

    Next steps

    Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

    At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

  • The Human Toll of ALPR Errors

    This post was written by Gowri Nayar, an EFF legal intern.

    Imagine driving to get your nails done with your family when, all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

    And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
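
    To make that failure mode concrete, here is a minimal sketch of a hot-list comparison, using made-up plate data rather than any real ALPR vendor’s system. A match on plate characters alone flags a Colorado SUV because a stolen Montana motorcycle happens to share the same characters; a check that also compares state and vehicle type, the kind of double-check a careful human review supplies, does not.

    # Made-up hot list keyed the naive way: by plate characters only.
    HOT_LIST = {
        "ABC1234": {"state": "MT", "vehicle": "motorcycle", "reason": "stolen"},
    }

    def naive_match(plate):
        # Flags any capture whose characters match a hot-list entry.
        return HOT_LIST.get(plate)

    def careful_match(plate, state, vehicle):
        # Requires the state and vehicle type to match before flagging.
        hit = HOT_LIST.get(plate)
        if hit and hit["state"] == state and hit["vehicle"] == vehicle:
            return hit
        return None

    print(naive_match("ABC1234"))                 # false hit on a Colorado SUV
    print(careful_match("ABC1234", "CO", "SUV"))  # None: no real match

    An OCR misread of a single character (like the ‘3’ captured as a ‘7’ in the case below) produces a false hit in exactly the same way.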

    Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station, when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her at gunpoint to exit her vehicle and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

    Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

    In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, in part because too many police officers react recklessly to information provided by these readers.

    Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ as an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on the license plate. One study found that ALPRs misread the state of 1-in-10 plates (not counting other reading errors).

    Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

    Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog light.

    Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

    Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

    While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

    Since 2012, EFF has been resisting the safety, privacy, and other threats of ALPR technology through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

  • Court to California: Try a Privacy Law, Not Online Censorship

    In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court.

    The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point.

    The court also provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

    The Act, the rulings, and our amicus brief

    In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling.

    1. The Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk.
    2. The Act requires online businesses to follow enumerated data privacy rules specific to children. These include data minimization, and limits on processing precise geolocation data.
    3. The Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

    In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.

    EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that, if the court rejected our severability argument, the Act’s privacy principles could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

    The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests. The appeals court sent the case back to the lower court to do so.

    Good news: No online censorship

    The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online.

    The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.

    The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.”

    The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions.

    “There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote.

    Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online.

    EFF is pleased that the court saw AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

    More good news: A roadmap for privacy-first laws

    The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court to apply the correct tests in the first instance.

    In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

    Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice. The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face.

    Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review.

    Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children.

    Social media. Many privacy laws apply to social media companies. While EFF is second to none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate objections to privacy laws might best be resolved through “as-applied claims” in specific contexts, rather than facial challenges.

    Ninth Circuit punts on the AADC’s age-verification provision

    The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid danger of losing their online privacy.

    EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment.

    The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face.

    As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful.

    We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.
