What Are AI Agents, and Why Are They About to Be Everywhere?

All day, every day, you make choices. Philosophers have long argued that this ability to act intentionally, or with agency, distinguishes human beings from simpler life-forms and machines. But artificial intelligence may soon transcend that divide now that technology companies are building AI “agents”—systems able to make decisions and achieve goals with minimal human oversight.

Facing pressure to show returns on multibillion-dollar investments, AI developers are promoting agents as the next wave of consumer tech. Like chatbots, agents run on large language models and are accessible from phones, tablets and other personal devices. But unlike chatbots, which need constant hand-holding to generate text or images, agents can autonomously interact with external apps to perform tasks on behalf of individuals or organizations. OpenAI has listed agents as the third of five steps toward building artificial general intelligence (AGI)—AI that can outperform humans on any cognitive task—and the company is reportedly slated to release an agent code-named “Operator” in January. That system could be an early drop in a downpour: Meta chief executive Mark Zuckerberg has predicted that AI agents will eventually outnumber humans. Some AI experts, meanwhile, fear that the commercialization of agents is a dangerous new step for an industry that has tended to prioritize speed over safety.

According to big tech’s sales pitch, agents will liberate human workers from drudgery, opening the door to more meaningful work (and big productivity gains for businesses). “By freeing us from mundane tasks, [agents] can empower us to focus on what truly matters: relationships, personal growth and informed decision-making,” says Iason Gabriel, a senior researcher at Google DeepMind. Last May the company unveiled a prototype of “Project Astra,” described as “a universal AI agent that is helpful in everyday life.” In a video demonstration, Astra speaks to a user through a Google Pixel phone and analyzes the environment via the device’s camera. At one point the user holds the phone up to a colleague’s computer screen, which is filled with lines of code. The AI describes the code—it “defines encryption and decryption functions”—in a humanlike female voice.


Project Astra isn’t expected to be publicly released until next year at the earliest, and currently available agents are mostly limited to monotonous labor, such as writing code or filing expense reports. This reflects both technical limitations and developers’ wariness about trusting agents in high-stakes arenas. “Agents should be deployed to implement menial and repetitive tasks” that can be “very clearly defined,” says Silvio Savarese, chief scientist at cloud-based software company Salesforce. The company recently introduced Agentforce, a platform offering agents that can field customer service questions and perform other narrow functions. Savarese says he’d “feel very hesitant” to trust agents in more sensitive contexts such as legal sentencing.

Although Agentforce and similar platforms are mostly marketed toward businesses, Savarese predicts the eventual rise of personal agents, which could access your personal data and constantly update their understanding of your needs, preferences and quirks. An app-based agent tasked with planning your summer vacation, for example, could book your flights, secure tables at restaurants and reserve your lodging while remembering your preference for window seats, your peanut allergy and your fondness for hotels with a pool. Crucially, it would also need to respond to the unexpected: if the best flight option were fully booked, it’d need to adjust course (by checking another airline, perhaps). “The ability to adapt and react to an environment is essential for an agent,” Savarese says. Early iterations of personal agents may already be on the way. Amazon, for example, is reportedly working on agents that’ll be able to recommend and buy products for you based on your online shopping history.
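That adapt-and-react loop is easy to sketch in code. The toy Python example below is purely illustrative: every name in it is a hypothetical stand-in for whatever real booking services a personal agent would call, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    window_seat: bool
    seats_left: int

def search_flights(airline: str) -> list[Flight]:
    """Stand-in for a real flight-search call; returns canned results."""
    inventory = {
        "PreferredAir": [Flight("PreferredAir", window_seat=True, seats_left=0)],  # fully booked
        "BackupAir": [Flight("BackupAir", window_seat=True, seats_left=3)],
    }
    return inventory.get(airline, [])

def book_flight(preferred: str, fallback: str) -> Flight | None:
    """Try the preferred option first; re-plan when the environment says no."""
    for airline in (preferred, fallback):
        for flight in search_flights(airline):
            # Check both availability and the user's remembered preference.
            if flight.seats_left > 0 and flight.window_seat:
                return flight
        # This airline didn't work out, so adjust course and try the next one.
    return None  # surface failure to the user rather than guessing

print(book_flight("PreferredAir", "BackupAir"))  # adapts: falls back to BackupAir
```

The design point is the fallback path: a rigidly scripted bot would simply fail on the sold-out flight, whereas an agent is expected to notice the failure and re-plan.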

What Makes an Agent?

The sudden surge of corporate interest in AI agents belies their long history. All machine-learning algorithms are technically “agentic” in that they constantly “learn,” or refine their ability to achieve specific goals, based on patterns gleaned from mountains of data. “In AI, we have, for decades, viewed all systems as agents,” says Stuart Russell, a pioneering AI researcher and computer scientist at the University of California, Berkeley. “It’s just that some of them are very simple.”

But modern AI tools are becoming more agentic thanks to several recent innovations. One is the ability to use digital tools such as search engines. Through a “computer use” feature released for public beta testing in October, the model behind AI company Anthropic’s Claude chatbot can now move a cursor and click buttons after being shown screenshots of a user’s desktop. A video released by the company shows Claude filling out and submitting a fictional vendor request form.
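For developers, invoking the beta looks roughly like the sketch below, which follows Anthropic’s published documentation for the October 2024 release (the model name, tool type and beta flag are as documented then and may have changed since). Note that the API never touches the screen itself: Claude responds with requested actions, and the calling program is responsible for executing them and sending back fresh screenshots.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # the computer-use tool from the October 2024 beta
        "name": "computer",
        "display_width_px": 1024,      # tell the model the screenshot dimensions
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Fill out the vendor request form on screen."}],
    betas=["computer-use-2024-10-22"],
)

# The reply contains tool_use blocks such as "take a screenshot" or
# "click at (x, y)"; an agent loop executes each one and returns the result.
print(response.content)
```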

Agency also correlates with the ability to make complex decisions over time; as agents become more advanced, they’ll be put to work on more sophisticated tasks. Google DeepMind’s Gabriel envisions a future agent that could help to discover new scientific knowledge. And this might not be far away: a paper posted to the preprint server arXiv.org in August outlined an “AI Scientist” agent capable of formulating new research ideas and testing them through experimentation—effectively automating the scientific method.
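At its core, that automation is a loop: propose a hypothesis, run an experiment, score the result and let the outcome inform the next proposal. The toy Python sketch below shows only the shape of such a loop; the random stand-ins replace the steps the actual paper delegates to large language models, and nothing here is the authors’ code.

```python
import random

def propose_idea(history: list[str]) -> str:
    """Stand-in for an LLM generating a new hypothesis from past results."""
    return f"idea-{len(history) + 1}"

def run_experiment(idea: str) -> float:
    """Stand-in for actually running and evaluating an experiment."""
    return random.random()

def research_loop(n_iterations: int = 5) -> tuple[str, float]:
    history: list[str] = []
    best_idea, best_score = "", float("-inf")
    for _ in range(n_iterations):
        idea = propose_idea(history)
        score = run_experiment(idea)
        history.append(f"{idea} scored {score:.2f}")  # results inform the next proposal
        if score > best_score:
            best_idea, best_score = idea, score
    return best_idea, best_score  # the most promising finding so far

print(research_loop())
```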

Despite the close ontological associations between agency and consciousness, there’s no reason to believe that, in machines, advances in the former will lead to the latter. Tech companies certainly aren’t advertising these tools as having anything close to free will. Users might well treat agentic AI as if it were sentient—but that would reflect, more than anything else, the millions of years of evolution that have hardwired people’s brains to attribute consciousness to anything that seems human.

Burgeoning Challenges

The rise of agents could present new challenges in the workplace, on social media, across the Internet and throughout the economy. Legal frameworks that have been carefully crafted over decades or centuries to constrain the behavior of human beings will need to account for the sudden introduction of artificial agents, whose behavior fundamentally differs from our own. Some experts have even insisted that a more accurate description of AI is “alien intelligence.”

Take the financial sector, for example. Algorithms have long helped track the prices of various goods, adjusting for inflation and other variables. But agentic models are now starting to make financial decisions for individuals and organizations, potentially raising a host of thorny legal and economic questions. “We haven’t created the infrastructure to integrate [agents] into all the rules and structures we have to make sure our markets behave well,” says Gillian Hadfield, an expert in AI governance at Johns Hopkins University. If an agent signs a contract on behalf of an organization and later violates the terms of that agreement, should the organization be held accountable—or the algorithm itself? By extension, should agents be granted legal “personhood,” like corporations are?

Another challenge is designing agents whose behavior conforms with human ethical norms—a problem known in the field as “alignment.” As agency increases, it becomes harder for humans to decipher how an AI is making decisions: goals get broken down into increasingly abstract subgoals, and models occasionally display emergent behaviors that are impossible to predict. “There’s a really clear path from having agents that are good at planning to loss of human control,” says Yoshua Bengio, a computer scientist whose pioneering work on deep neural networks helped enable the current AI boom.

According to Bengio, the alignment problem is compounded by the fact that the priorities of big tech companies tend to be at odds with those of humanity at large. “There’s a real conflict of interest between making money and protecting the safety of the public,” he says. In the 2010s the algorithms used by Facebook (now Meta), given the seemingly benign goal of maximizing user engagement, began promoting content to users in Myanmar that was hateful toward the country’s Rohingya minority. That strategy—which the algorithms arrived at entirely on their own, after learning that inflammatory content drove more engagement—ultimately helped fuel an ethnic cleansing campaign that killed thousands of people. The risks of misalignment and of models manipulating humans will likely increase as algorithms become more agentic.

Watchdogs for Agents

Bengio and Russell have argued that regulating AI is necessary to avoid repeating past mistakes or being caught unawares by new ones. Both scientists are among the more than 33,000 signatories of an open letter, published in March 2023, that called for a six-month pause on training the most powerful AI systems so that guardrails could be established. As tech companies race ahead to build agentic AI, Bengio urges the precautionary principle: the idea that powerful scientific advancements should be scaled slowly and commercial interests should take a back seat to safety.

That principle is already the norm in other U.S. industries. A pharmaceutical company can’t release a new drug until it has undergone rigorous clinical trials and received approval from the Food and Drug Administration; an airplane manufacturer can’t launch a new passenger jet without certification from the Federal Aviation Administration. While some early regulatory steps have been taken—most notably President Joe Biden’s executive order on AI (which President-elect Donald Trump has vowed to repeal)—no comprehensive federal framework currently exists to oversee the development and deployment of AI.

The race to commercialize agents, Bengio warns, could quickly push us past a point of no return. “Once we have agents, they’ll be useful, they’ll have economic value, and their value will grow,” he says. “And once governments understand that they could also be dangerous, it might be too late because the economic value will be such that you can’t stop it.” He compares the ascendancy of agents to that of social media, which, in the 2010s, quickly outpaced any chance of effective government oversight.

As the world prepares to greet a flood of artificial agency, there’s never been a more pressing time to exercise our own. As Bengio puts it, “We need to think carefully before we jump.”
