“Please slow down”—The 7 biggest AI stories of 2022

AI image synthesis advances in 2022 made images like this one possible; it was created using Stable Diffusion, enhanced with GFPGAN, expanded with DALL-E, and then manually composited together.

Benj Edwards / Ars Technica

More than once this year, AI experts have repeated a familiar refrain: “Please slow down.” AI news in 2022 came rapid-fire and relentless; the moment you knew where things stood, a new paper or discovery would make that understanding obsolete.

In 2022, we arguably hit the knee of the curve when it came to generative AI that can produce creative works made up of text, images, audio, and video. This year, deep-learning AI emerged from a decade of research and began making its way into commercial applications, allowing millions of people to try out the tech for the first time. AI creations inspired wonder, created controversies, prompted existential crises, and turned heads.

Here’s a look back at the seven biggest AI news stories of the year. It was hard to choose only seven, but if we didn’t cut it off somewhere, we’d still be writing about this year’s events well into 2023 and beyond.

April: DALL-E 2 dreams in pictures

A DALL-E example of “an astronaut riding a horse.”

OpenAI

In April, OpenAI announced DALL-E 2, a deep-learning image-synthesis model that blew minds with its seemingly magical ability to generate images from text prompts. Trained on hundreds of millions of images pulled from the Internet, DALL-E 2 knew how to make novel combinations of imagery thanks to a diffusion technique, which starts with pure noise and iteratively refines it toward an image that matches the prompt.
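
DALL-E 2 itself isn't open, but the same text-to-image idea can be tried with openly released diffusion models. Below is a minimal sketch using the Hugging Face diffusers library and the open Stable Diffusion weights; the checkpoint name and library usage are stand-ins for illustration, not OpenAI's code.

# Sketch: text-to-image with an open diffusion model via the "diffusers"
# library. Stable Diffusion stands in for DALL-E 2, which has no public
# code; the checkpoint ID below is an assumption, not an OpenAI release.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any open SD checkpoint works here
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "cpu" (slowly) if no GPU is available

# The text prompt conditions a denoising loop that turns random noise
# into an image matching the description.
image = pipe("an astronaut riding a horse").images[0]
image.save("astronaut.png")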

Twitter was soon filled with images of astronauts on horseback, teddy bears wandering ancient Egypt, and other nearly photorealistic works. We last heard about DALL-E a year prior when version 1 of the model had struggled to render a low-resolution avocado chair—suddenly, version 2 was illustrating our wildest dreams at 1024×1024 resolution.

At first, given concerns about misuse, OpenAI allowed only 200 beta testers to use DALL-E 2, with content filters blocking violent and sexual prompts. Gradually, OpenAI let more than a million people into a closed trial, and DALL-E 2 finally became available to everyone in late September. But by then, another contender in the diffusion world had risen, as we’ll see below.

July: Google engineer thinks LaMDA is sentient

Former Google engineer Blake Lemoine.

Getty Images | Washington Post

In early July, the Washington Post broke the news that Google had placed engineer Blake Lemoine on paid leave over his belief that Google’s LaMDA (Language Model for Dialogue Applications) was sentient, and that it deserved rights equal to those of a human.

While working as part of Google’s Responsible AI organization, Lemoine began chatting with LaMDA about religion and philosophy and believed he saw true intelligence behind the text. “I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Google replied that LaMDA was only telling Lemoine what he wanted to hear and that it was not, in fact, sentient. Like the text-generation tool GPT-3, LaMDA had been trained on millions of books and websites. It responded to Lemoine’s input (a prompt that includes the entire text of the conversation so far) by predicting the most likely words to follow, with no deeper understanding.
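
LaMDA itself isn't public, but that mechanism is easy to demonstrate with any open autoregressive language model. Here is a minimal sketch using GPT-2 via the transformers library; the model choice and dialogue framing are assumptions for illustration only, not LaMDA's actual code.

# Sketch: a "conversation" with a language model is just next-token
# prediction over a prompt containing the whole dialogue so far.
# GPT-2 stands in for LaMDA purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each turn, the full conversation is fed back in as the prompt.
conversation = (
    "Human: Do you ever think about the nature of the soul?\n"
    "AI:"
)
inputs = tokenizer(conversation, return_tensors="pt")

# The model ranks every possible next token and samples from the most
# likely ones -- statistics over training text, not understanding.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))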

Along the way, Lemoine allegedly violated Google’s confidentiality policy by telling others about his group’s work. Later in July, Google fired Lemoine for violating data security policies. He was not the last person in 2022 to get swept up in the hype over a large language model, as we’ll see.

July: DeepMind AlphaFold predicts almost every known protein structure

Diagram of protein ribbon models.

In July, DeepMind announced that its AlphaFold AI model had predicted the shape of almost every known protein from almost every organism on Earth with a sequenced genome. First released in the summer of 2021 with predictions covering nearly all human proteins, AlphaFold’s database had expanded a year later to more than 200 million protein structures.

DeepMind made these predicted protein structures available in a public database hosted by the European Bioinformatics Institute at the European Molecular Biology Laboratory (EMBL-EBI), allowing researchers around the world to access and use the data for research in medicine and biological science.
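
Because the database sits behind ordinary web URLs, pulling down a predicted structure takes only a few lines. The sketch below assumes the AlphaFold DB's public file-naming scheme and a "v4" model suffix; check alphafold.ebi.ac.uk for the current paths.

# Sketch: fetch one AlphaFold-predicted structure from the public
# EMBL-EBI database. The URL pattern and "v4" suffix are assumptions;
# consult https://alphafold.ebi.ac.uk for the current scheme.
import requests

uniprot_id = "P69905"  # hemoglobin subunit alpha, as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

response = requests.get(url, timeout=30)
response.raise_for_status()

# The PDB file holds predicted 3D coordinates for every residue and
# opens in viewers such as PyMOL or ChimeraX.
with open(f"{uniprot_id}.pdb", "wb") as f:
    f.write(response.content)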

Proteins are basic building blocks of life, and knowing their shapes can help scientists control or modify them. That comes in particularly handy when developing new drugs. “Almost every drug that has come to market over the past few years has been designed partly through knowledge of protein structures,” said Janet Thornton, a senior scientist and director emeritus at EMBL-EBI. That makes knowing all of them a big deal.




