
How Artificial Intelligence Misleads Us to “Justify” Our Bad Actions

Why we can’t fully trust artificial intelligence, even as its advice grows more influential

Rainer M. Rilke - September 14, 2023

Little by little, AI has become an indispensable consulting tool, one capable of influencing our daily lives. ChatGPT and Amazon’s Alexa, each with over 100 million users, are far more than merely convenient; they have become vital resources in a variety of everyday situations. And both represent only a small portion of today’s widespread use of AI for one particular application: natural language processing (NLP), which allows computers to understand, interpret, and manipulate human language. In most cases, people cannot even tell whether a text was written by a human or by a machine. The quality of these texts is often high, and their content plausible.

AI wants to produce results—regardless of the morality of its approach

Large companies such as the social media platform LinkedIn and the real estate website Zillow have already begun to employ AI-based sales-support tools. These use NLP algorithms to analyze the sales conversations of the company’s employees and give them tips on how to close even better deals. If an algorithm concludes that acting unscrupulously toward a customer would be beneficial and yield higher sales, it might recommend that the worker do just that. AI is programmed to pursue a specific goal, which is often to maximize profit. Accordingly, AI-supported advisory tools will recommend the best possible course of action toward that goal, even if following it would be unethical.
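To see the mechanism in the abstract, consider a minimal, purely hypothetical sketch (the function, the tactics, and the numbers below are invented for illustration and do not describe any real LinkedIn or Zillow system): an advisory tool whose objective is expected profit alone will rank a deceptive tactic above an honest one whenever deception promises the higher payoff.

    # A deliberately simplified, hypothetical sketch: an advice tool that scores
    # candidate sales tactics purely by expected profit. Honesty never enters
    # the objective, so a deceptive tactic with a higher expected payoff wins.
    from dataclasses import dataclass

    @dataclass
    class Tactic:
        description: str
        expected_profit: float  # the only quantity the objective sees
        is_honest: bool         # recorded for the reader, ignored by the ranking

    def recommend(tactics: list[Tactic]) -> Tactic:
        # Pure profit maximization; no ethical constraint anywhere.
        return max(tactics, key=lambda t: t.expected_profit)

    tactics = [
        Tactic("Disclose the product's known defects", expected_profit=80.0, is_honest=True),
        Tactic("Stay silent about the known defects", expected_profit=120.0, is_honest=False),
    ]

    best = recommend(tactics)
    print(f"Recommended: {best.description} (honest: {best.is_honest})")
    # Prints: Recommended: Stay silent about the known defects (honest: False)

A safeguard would have to change the objective itself, for example by filtering out tactics flagged as dishonest before ranking, which is precisely the kind of design choice developers are asked to make further below.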

Dishonest behavior gets encouraged, honest behavior not so much

So how do people respond to a moral dilemma in which being dishonest would benefit them more than acting honestly? People are generally quite reluctant to accept the advice of others, believing themselves more capable of reaching the best decision on their own. What influence advice has in a moral dilemma is less clear. Take, for example, a salesperson advising a potential customer. A salesperson who explains all the pros and cons of a product, even at the risk of losing the customer, shows colleagues how to act in similar situations and thereby serves as a sort of moral compass. In the end, some could be convinced to forgo potential gains for the sake of acting ethically. Advice that requires one to act unethically, for example by withholding some of a product’s known drawbacks, carries the same weight and could give coworkers a justification for breaking the moral code.

Astonishingly, people are far quicker to accept advice that requires them to act dishonestly than advice that requires them to act with scruples, regardless of whether it comes from an AI or from another human being. If the advising entity makes an immoral suggestion that promises success, the advisee is often prepared to trick and lie to others for their own advantage, and will hold the AI partially responsible for their unethical actions, just as they would a human advisor. The same applies to immoral decisions made in the midst of a moral dilemma. What is more, receiving advice that calls for dishonest behavior further increases how dishonestly a person will act; advice that calls for honest behavior has no comparable effect.

When there are personal gains to be had, people are easily swayed by AI to act dishonestly. Helpful though it may be in daily life, AI harbors the potential to manipulate people and perpetuate unethical behavior. In other words: be careful! The advice that AI-based consulting tools give should always be scrutinized.

How can we work responsibly with AI-generated advice in the future?

Because the observance of moral guidelines is so important for societal harmony, we have to think about how to use AI-based consulting tools responsibly in the future. Merely labeling advice as AI-generated is not enough: users accept it just as readily as if a human had given it, even when they know they are talking to an AI program. The generation of unethical advice, whether intentional or not, must be taken into account when regulating AI usage and prevented from the start. It is up to the political sphere and the academic world to allocate more resources to developing solutions to this problem. And the developers behind these tools must be aware of the possible effects their work can have on society and ensure that their AI cannot issue unethical advice.

Tips for Practitioners

  • Be wary when you receive advice online! AI could advise you (perhaps unintentionally) to act unethically in a bid to help you realize your goal(s).
  • Given that AI-generated advice can have the same effects as advice coming from another human (regardless of whether said advice requires ethical or unethical action), it is of particular importance that AI-based systems be designed to consider morality. Companies that employ AI should check how these systems have been programmed in order to prevent them from accidentally encouraging unethical behavior.
  • If you are a programmer or a policymaker, strive to be more aware of the negative consequences of following unethical AI-generated advice, and allocate the resources necessary to combat this phenomenon. If it continues, and ever more unethical advice is given based solely on its economic advantage, the world will have a big problem on its hands.

Literature and methodology

- Leib, M./Köbis, N./Rilke, R. M./Hagens, M./Irlenbusch, B. (2023): Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty, in: The Economic Journal, forthcoming.

Co-author of the study

Assistant Professor Rainer Michael Rilke

Rainer Michael Rilke is an Assistant Professor of Business Economics at the IHK-Chair for Small and Medium-Sized Enterprises at WHU – Otto Beisheim School of Management. His research and teaching focus on experimental economics and the analysis of human behavior in social contexts. His work examines honesty, deceit, and corruption in management, team conflict, and AI-generated advice. The primary objective of his research is to gain insight into the factors that shape human behavior in social and economic environments and to understand how to incentivize ethical and efficient decision-making.
