AI wins art award: The challenges of a more human machine


2022 had an unusual blue-ribbon winner for emerging digital artists: Jason Allen’s winning work, Théâtre D’opéra Spatial, was created with a generative AI model called Midjourney, and the artist was formally listed as “Jason M Allen via Midjourney”. Other artists were horrified, and a vicious backlash followed, with one of them tweeting, “This thing wants our jobs, it’s actively anti-artist.” Allen was unrepentant, saying he had won fair and square, without breaking any rules. He added, for good measure, “This isn’t going to stop. Art is dead, dude. It’s over. AI won. Humans lost.”

Midjourney is one of the rash of generative AI models, among them the large language models (LLMs), which have exploded onto our world in the last few years. Earlier models like BERT (2018) and Megatron (2019) were relatively small, trained on datasets of up to 174 GB, and passed under the collective public radar. GPT-3, released by OpenAI with a 570 GB dataset and 175bn parameters, was the first to capture the public consciousness with some amazing writing and composition skills. The real magic, however, started with transformer models that could create beautiful and realistic pieces of art from just a text prompt: OpenAI’s DALL-E 2, Google’s Imagen, the open-source Stable Diffusion and, of course, Midjourney. Not to be left behind, Meta unleashed a model that could create videos from text prompts. Then, later in 2022, came the model to rule them all: ChatGPT, built on GPT-3 but with the capability to hold real conversations with human beings. Next year promises a Cambrian explosion of such models, with pundits rhapsodising about how they have made the famed Turing Test obsolete, displaced search, and accelerated our journey to the holy grail of AGI, or Artificial General Intelligence.

There is so much to say about these models, but today I will focus on something even the most enthusiastic users are wrestling with: Are these models ethical? Ethics is too complex a subject to address in one short column, but let me focus on the three big ethical questions about these models that humanity will have to address in short order.

Environmental: Most of the bad rap goes to crypto and blockchain, but the cloud, and the AI models running on it, take enormous amounts of energy. Training a large transformer model just once can produce CO2 emissions equivalent to 125 round trips from New York to Beijing. And that is for a single training run of a model with just 213mn parameters; GPT-3 has 175bn, and the incoming GPT-4 has a rumoured 100trn! When we hear the word “cloud”, we think of a wafting, fluffy thing in the sky. This “cloud” is, in fact, the hundreds of data centres that dot our planet, and they guzzle water and power at alarming rates. Ben Tarnoff wrote in The Guardian that data centres currently consume 200 terawatt hours per year, roughly the same amount as South Africa, and are likely to grow four to five times by 2030, which would put the cloud on par with Japan, the fourth-biggest energy consumer! “The cloud,” says author Kate Crawford, “is made of rocks and lithium brine and crude oil.”
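The flight comparison can be roughly sanity-checked with back-of-the-envelope arithmetic. The per-passenger figure below is an outside assumption (very roughly 2.3 tonnes of CO2 for a New York–Beijing round trip), not a number from this column:

```python
# Back-of-the-envelope sketch of the training-emissions comparison.
# ASSUMPTION: ~2.3 tonnes of CO2 per passenger for a New York-Beijing
# round trip. The 125-round-trip figure is the one quoted in the text.
CO2_PER_ROUNDTRIP_TONNES = 2.3   # assumed per-passenger estimate
ROUNDTRIPS = 125                 # figure quoted in the column

training_footprint_tonnes = CO2_PER_ROUNDTRIP_TONNES * ROUNDTRIPS
print(f"Implied training footprint: ~{training_footprint_tonnes:.0f} tonnes CO2")
# Under these assumptions, one training run of a ~213mn-parameter model
# comes out to roughly 290 tonnes of CO2.
```

Under that assumption, a single training run lands in the region of 290 tonnes of CO2, which is why researchers began flagging training costs well before models reached hundreds of billions of parameters.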

Bias: The other thorny ethical issue is that sheer size does not guarantee diversity. Timnit Gebru was with Google when she co-wrote a seminal research paper calling these LLMs ‘stochastic parrots’ because, like parrots, they repeat a litany of words without understanding their meaning and implications. Children are taught to use words wisely, since words have the power to hurt and to heal. These models, however, have no inkling of that and spew out probabilistically coherent combinations of words or pictures from the data they are trained on. A large part of GPT’s training set, for example, was Reddit and Wikipedia: 67 per cent of Reddit users in the US are men, and two-thirds are young, as you would expect; women made up less than 15 per cent of all Wikipedians. DALL-E 2 skews towards creating images of white men and reportedly oversexualises images of women. Again, this is because it is trained on the massive trove of open-source images on the Internet, and the imagery and language on the internet are still overwhelmingly Western, male, and sexist (for example, men being called “doctors” while women are referred to as “women doctors”).
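The “stochastic parrot” idea can be made concrete with a toy sketch. The bigram model below is purely illustrative, and the tiny corpus is hypothetical and deliberately skewed; real LLMs are vastly more sophisticated, but the underlying point is the same: the model can only re-emit statistically likely word sequences from its training data, with no grasp of meaning.

```python
import random
from collections import defaultdict

# Illustrative toy only: a bigram "language model". Like the "stochastic
# parrots" critique suggests, it can only replay word-to-word statistics
# from its training text. The tiny corpus below is hypothetical and skewed:
# "doctor" is followed by "he" twice and never by "she".
corpus = (
    "the doctor said he was busy . "
    "the doctor said he would call . "
    "the nurse said she was busy ."
).split()

# Count which words follow which: this table is the model's entire "knowledge".
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def parrot(seed, length=6, rng=random.Random(0)):
    """Generate text by sampling each next word in proportion to how
    often it followed the current word in the training data."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(parrot("doctor"))
```

Because the corpus pairs “doctor” with male pronouns and “nurse” with female ones, the generated text reproduces that skew; the model has no notion that anything is amiss. Scaled up to web-sized corpora, the same mechanism reproduces the web’s biases.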

Plagiarism: The third prickly ethical issue, which also prompted the artist backlash to Allen’s award-winning work, is that of plagiarism. If Stable Diffusion or DALL-E 2 did all the work of scouring the web and combining multiple images (a Pablo Picasso Mona Lisa, for example), who owns the result: Allen, the generative AI model, or the many artists, including Picasso and da Vinci, whose original pictures and compositions were mashed together to create the new artwork? Currently, OpenAI retains ownership of all images created with DALL-E, and its business model is to grant paid users the rights to reproduce, print, sell and merchandise the images they create. This is a legal minefield: the US Copyright Office recently refused to grant a copyright for a piece created by a generative AI called Creativity Machine, while South Africa and Australia have recently ruled that an AI can be considered an inventor.

Besides the legal quagmire, there is a bigger fear: This kind of cheap, mass-produced art could put artists, photographers, and graphic designers out of their jobs. A machine is not necessarily creating art; it is crunching and manipulating data, with no idea of what it is doing or why. But it can do so cheaply, and at scale. Corporate customers might seriously consider it for their creative, advertising, and other needs.

Legal and political leaders across the world are sounding the alarm about the ethics of large generative models, and for good reason. As these models become increasingly powerful in the hands of Big Tech, with their unlimited budgets, brains and computing power, these issues of bias, environmental damage and plagiarism will become even more fraught. Early signs are not encouraging: Timnit Gebru, after publishing her prescient paper on these dangers, was summarily fired from Google.

The writer is the author of The Tech Whisperer, teaches at Ashoka, and is completing his Masters in AI, Ethics and Society at Cambridge University

