Please Don’t Take Moral Advice from ChatGPT

Should I tell my friend their boyfriend is cheating on them? Should I intervene when I hear an off-color joke?

When faced with moral questions—situations where the course of action relates to our sense of right and wrong—we often seek advice. And now people can turn to ChatGPT and other large language models (LLMs) for guidance, too.

Many people seem satisfied by the answers these models offer. In one preprint study, people rated the responses that LLMs produced when presented with moral quandaries as more trustworthy, reliable and even nuanced than those of Kwame Anthony Appiah, who writes the Ethicist column for the New York Times.


That study joins several others that together suggest LLMs can offer sound moral advice. Another, published last April, found that people rated an AI’s reasoning as “superior” to a human’s in virtuousness, intelligence and trustworthiness. Some researchers have even suggested that LLMs can be trained to offer ethical financial guidance despite being “inherently sociopathic.”

These findings imply that virtuosic ethical advice is at our fingertips—so why not ask an LLM? But that takeaway rests on several questionable assumptions. First, research shows that people do not always recognize good advice when they see it. In addition, many people think the content of advice—the literal words, written or spoken—matters most when judging its value, but social connection may be particularly important for tackling dilemmas, especially moral ones.

In a 2023 paper, researchers analyzed many studies to examine, among other things, what made advice most persuasive. The more expert people perceived an advice giver to be, it turned out, the more likely they were to actually take their counsel. But perception need not align with actual expertise. Furthermore, experts are not necessarily good advice givers, even in their own domain of expertise. In one series of experiments in which people learned to play a word search game, those who got advice from the game’s top scorers did not do any better than those coached by low-scoring players. People who perform well at a task don’t always know how they do it, so they cannot necessarily teach someone else to do it.

People also tend to think neutral, factual information will be more informative than the subjective details of, say, a firsthand account. But that’s not necessarily the case. Consider a study in which undergraduate students came into the lab for speed-dating sessions. Before each date, they were presented with either the profile of the person they were about to meet or a testimony describing another student’s experience with the activity. Even though participants expected the factual information about their date to be a superior predictor of how the session would go, people who read someone else’s testimony made more accurate forecasts about their experiences.

Of course, ChatGPT cannot draw from lived experience to provide counsel. But even if we could ensure that we receive (and recognize) quality advice, there are other social benefits that LLMs cannot replicate. When we seek moral advice, we are likely sharing something personal—and, often, we want intimacy more than instruction. Engaging in self-disclosure is a known way to quickly feel close to someone. Over the course of the conversation, the adviser and the advisee may seek and establish a shared reality—that is, the sense of commonality in inner states such as feelings, beliefs and concerns about the world—and that, too, promotes closeness. Although people may feel that they are establishing a sense of closeness and shared reality with an LLM, the models are not good long-term substitutes for interpersonal relationships, at least for now.

Of course, some people may want to sidestep social interaction. They may worry that the conversation will be awkward or that sharing their problems will burden their friends. Yet research consistently finds that people underestimate how much they enjoy both brief, spontaneous conversations and deep, heartfelt ones with friends.

With moral advice, we should be particularly careful—it has the added quirk of feeling more like objective fact than an opinion or preference. Your (or my) take on whether salt and vinegar is the best potato chip flavor is subjective. But ideas such as “stealing is bad” and “honesty is good” feel definitive. As a result, counsel that comes with lots of moral justification can seem especially compelling. For that reason, it is advisable to carefully evaluate any one instance of moral advice from any adviser, AI or human.

Sometimes the best way to navigate debates that are charged with convictions about the moral high ground is to reframe them. When people have strong moral beliefs and see an issue in very black-and-white terms, they can become resistant to compromise or other practical forms of problem solving. My past work suggests that when people moralize risky sexual behavior, cigarette use or gun ownership, they are less likely to support policies that reduce the harm linked to those behaviors because such policies still permit the behavior. By contrast, people do not worry about reducing harm for behaviors that seem outside the purview of morality, such as wearing a seatbelt or helmet. Shifting perspective from a moral lens to a practical one is already difficult for a person to do, and it is likely too much to ask of an LLM, at least in its current iterations.

And that brings us to one more concern regarding LLMs. ChatGPT and other language models are very sensitive to the way questions are asked. As a study published in 2023 demonstrated, LLMs will give inconsistent and sometimes contradictory moral advice from one prompt to the next. The ease with which a model’s answers can be shifted should prompt us to take a beat. Interestingly, that same study found that people did not believe the model’s counsel swayed their judgment, yet participants who read LLM-generated advice were more likely to act in accordance with it than a similar group who did not read the LLM’s messages. In short, the LLM’s input influenced people more than they realized.

When it comes to LLMs, proceed with caution. People are not the best at gauging good advisers and good advice, especially in the moral domain, and we often need true social connection, validation and even challenges more than we need an “expert” response. So you can ask an LLM, but don’t leave it at that. Ask a friend, too.

Are you a scientist who specializes in neuroscience, cognitive science or psychology? And have you read a recent peer-reviewed paper that you would like to write about for Mind Matters? Please send suggestions to Scientific American’s Mind Matters editor Daisy Yuhas at dyuhas@sciam.com.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
