Protecting Students from Faulty Software and Legislation: 2023 Year in Review


Lawmakers, school districts, educational technology companies, and others keep rolling out legislation and software that threaten students’ privacy, free speech, and access to social media, all in the name of “protecting” children. At EFF, we fought back against this overreach and demanded accountability and transparency.

Bad bills and invasive monitoring systems, though sometimes well-meaning, hurt students rather than protect them from the perceived dangers of the internet and social media. We saw many efforts to bar young people, including students, from digital spaces, censor what they are allowed to see and share online, and monitor and control when and how they can do it. These efforts make it increasingly difficult for them to access information about everything from gun violence and drug abuse to politics and LGBTQ+ topics, all because some software or elected official considers these topics “harmful.”

In response, we doubled down on exposing faulty surveillance software, long a problem in many schools across the country. We launched a new project called the Red Flag Machine, an interactive quiz and report demonstrating the absurd inefficiency—and potential dangers—of student surveillance software that schools across the country use and that routinely invades the privacy of millions of children.


The project grew out of our investigation of GoGuardian, computer monitoring software used in about 11,500 schools to surveil about 27 million students—mostly in middle and high school—according to the company. The software allows school officials and teachers to monitor students’ computers and devices, talk to them via chat or webcam, block sites considered “offensive,” and get alerts when students access content that the software, or the school, deems harmful or explicit.

Our investigation showed that the software inaccurately flags massive amounts of useful material: sites about Black authors and artists, the Holocaust, and the LGBTQ+ rights movement. It also flagged the official Marine Corps fitness guide and the bios of the cast of Shark Tank. Bible.com was flagged because the text of Genesis 3 contains the word “naked.” We found thousands more examples of mis-flagged sites.

EFF built the Red Flag Machine to expose the ludicrous results of GoGuardian’s flagging algorithm. In addition to reading our research about the software, you can take a quiz that presents websites flagged by the software, and guess which of five possible words triggered the flag. The results would be funny if they were not so potentially harmful.

Congress Takes Aim At Students and Young People

Meanwhile, Congress this year resurrected the Kids Online Safety Act (KOSA), a bill that would increase surveillance and restrict access to information in the name of protecting children online—including students. KOSA would give state attorneys general the power to decide what content on many popular online platforms is dangerous for young people, and would enable censorship and surveillance. Sites would likely be required to block important educational content, often made by young people themselves, about how to deal with anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal thoughts. We urged Congress to reject this bill and encouraged people to tell their senators and representatives that KOSA will censor the internet but not help kids.

We also called out the brazen Eyes on the Board Act, which aims to end social media use entirely in schools. This heavy-handed bill would cut some federal funding to any school that doesn’t block all social media platforms. We can understand the desire to ensure students are focusing on schoolwork when in class, but this bill tells teachers and school officials how to do their jobs, and imposes unnecessary censorship.

Many schools already don’t allow device use in the classroom and block social media sites and other content on school-issued devices. Too much social media is not a problem that teachers and administrators need the government to correct—they already have the tools and know-how to do it.

Unfortunately, we’ve seen a slew of state bills that also seek to control what students and young people can access online. There are bills in Texas, Utah, Arkansas, Florida, and Montana, to name just a few, and keeping up with all this bad legislation is like a game of whack-a-mole.

Finally, teachers and school administrators are grappling with whether generative AI use should be allowed, and whether they should deploy detection tools to find students who have used it. We think the answer to both is no. AI detection tools are very inaccurate and carry significant risks of falsely flagging students for plagiarism. Meanwhile, AI use is growing rapidly and will likely have a significant impact on students’ lives and futures. They should be learning about and exploring generative AI now to understand some of its benefits and flaws. Demonizing it only deprives students of knowledge about a technology that may change the world around us.

We’ll continue to fight student surveillance and censorship, and we are heartened to see students fighting back against efforts that supposedly protect children but actually give the government control over who gets to see what content. It has never been more important for young people to defend our democracy, and we’re excited to join with them.

If you’re interested in learning more about protecting your privacy at school, take a look at our Surveillance Self-Defense guide on privacy for students.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.


