EFF to Court: Chatbot Output Can Reflect Human Expression

When a technology can have a conversation with you, it’s natural to anthropomorphize that technology, to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. It can also lead people, including judges in cases about AI chatbots, to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protection, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.

In fact, chatbot output can reflect the expressive choices of its creators and users, and it also implicates users’ right to receive information. That’s why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.

Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.
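To make that chain of choices concrete, here is a simplified illustration (our own, not drawn from the brief): in the chat-message format many language model APIs use, a developer-written system prompt and a user’s own instruction travel together in the request, and both shape what the model says back. The `build_request` and `generate` names below are hypothetical stand-ins, not any particular vendor’s API.

```python
# A minimal sketch of how human expressive choices flow into a chatbot request.
# The message structure mirrors the chat format many LLM APIs accept; there is
# no real model call here, only the assembly of the request.

def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the messages a model sees for a single conversational turn."""
    return [
        # Written by the developer: persona, tone, topics to favor or avoid.
        {"role": "system", "content": system_prompt},
        # Written by the user: their own instructions and framing.
        {"role": "user", "content": user_message},
    ]


if __name__ == "__main__":
    developer_choice = (
        "You are a helpful assistant. When asked about climate change, "
        "answer in line with the scientific consensus."
    )
    user_choice = "Explain the main drivers of recent global warming."

    for message in build_request(developer_choice, user_choice):
        print(f"{message['role']}: {message['content']}")

    # In a real deployment, this request would go to a model whose behavior was
    # already shaped upstream by training-data selection and reinforcement
    # learning from human feedback: further layers of human expressive choices.
```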

In addition, the right to receive speech is itself protected, even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.

None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.

We hope that our brief will be helpful to the court as the case progresses; the judge has decided not to send the question up on appeal at this time.

Read our brief below.


