Are Social Desirability Scales Even Desirable?
Why it is impossible to give the "right" response on social desirability scales
Lukas Lanz / Isabel Thielmann / Fabiola Gerpott - October 17, 2022
A commonly used tool in the job candidate selection process is asking applicants to fill out surveys designed to assess their personality. Responses to such surveys are often affected by social desirability bias: applicants may feel the need to present themselves in an overly positive way to make a good impression. A meta-analysis by researchers from WHU – Otto Beisheim School of Management and the Max Planck Institute for the Study of Crime, Security and Law has revealed that social desirability scales, designed to uncover these biased responses, are neither valid nor practically applicable.
In the 1950s, scholars and professionals developed social desirability scales to mitigate this inherent bias in such surveys. These scales contain statements that reflect an ideal, something highly socially desirable—and yet statistically improbable. For instance, applicants must indicate their agreement with statements such as "I never lie." Social desirability scales were initially intended to measure response style, i.e., the degree of overly positive self-presentation. Because it is unlikely that anybody could truthfully agree with such improbable statements, recruiters regard agreement as a lie. As a consequence, they also discount the applicant's other responses, e.g., on their reliability, motivation, or punctuality. In essence, these scales are predicated on the idea that one cannot fully trust an applicant's responses.
There are, however, two major problems with social desirability scales. First, there is a practical problem. Imagine a motivated job applicant has submitted their cover letter, CV, and additional materials to a company. The recruiter, in response, sends them a survey containing social desirability items that ask for their agreement with statements such as "I never cover up my mistakes" or "I never hesitate to help someone in case of emergency." What happens, depending on how the applicant responds?
- Option A: The respondent signals agreement.
- Option B: The respondent signals disagreement.
Option A would raise a red flag for the recruiter: It is unlikely that such a statement is true, and therefore, the applicant must be lying—not only here but in other responses, too. Option B, by comparison, would characterize the applicant as someone who cannot be trusted or relied on—and, therefore, the recruiter would not want them as an employee in their company. To put it plainly, neither of the two potential responses would result in a favorable outcome for the applicant. This puts the applicant in an uncomfortable spot where they cannot make a “right choice.”
Second, there is a growing concern regarding the validity of social desirability scales. In more recent research, scholars have argued against the "lie scale" interpretation. Instead, they propose that social desirability scales actually measure substance, i.e., true virtue and honest behavior. According to the substance interpretation, agreement with a social desirability item indicates a truthful answer, and a high score (here and elsewhere in the survey) indicates genuine reliability. Thus, depending on whether the recruiter interprets the scale as capturing "response style" or "substance," the same response given by an applicant results in a completely different outcome.
To inform the debate on "style versus substance," we conducted a meta-analysis of 41 published and unpublished studies. A meta-analysis aggregates and analyzes the results of multiple studies, thereby summarizing the collective findings in a field and enabling researchers to draw reliable conclusions from a large body of evidence. We find that social desirability scales do not clearly measure either style or substance exclusively; rather, they likely measure a mixture of both. As such, social desirability scores cannot be interpreted unambiguously, making them useless for professionals. They should, therefore, not be used in practice.
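To illustrate how a meta-analysis aggregates results, here is a minimal sketch of the standard fixed-effect approach, in which each study's effect size is weighted by the inverse of its sampling variance so that more precise (typically larger) studies count for more. The numbers below are purely illustrative, not data from our study:

```python
def fixed_effect_meta(effects, variances):
    """Combine per-study effect sizes into one pooled estimate
    using inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Hypothetical correlations between social desirability scores and
# honest behavior from three studies of different precision:
effects = [0.10, 0.25, 0.18]
variances = [0.004, 0.010, 0.006]
pooled, var = fixed_effect_meta(effects, variances)
print(round(pooled, 3))  # pooled estimate lies between the study estimates
```

Real meta-analyses (including ours) typically use more elaborate random-effects models that also account for variation in true effects across studies, but the weighting logic is the same.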
Tips for practitioners
- As a recruiter, refrain from using social desirability scales in assessment processes. If social desirability scales are employed to measure response style, understand that the applicant cannot make a satisfactory choice. Either they agree with the social desirability statements, and their responses are regarded as lies; or they disagree and present themselves in a bad light.
- Remember that these scales neither measure substance nor style exclusively. As a recruiter, you cannot make valid inferences from any data collected through them.
- Reflect on your assessment process and how certain measures might impact potential candidates. Putting applicants in uncomfortable situations does not improve their perception of your company.
- Employ a holistic hiring approach instead of relying on decades-old processes. The ever-evolving workplace is characterized by digitalization, remote work, and flexible working schedules. Companies must further develop their hiring processes to attract the best talent. For instance, they could consider additional aspects (e.g., cultural fit) to find potentially promising candidates. Empion, started by alumni of WHU, offers candidates and companies the opportunity to check if their respective ideas about the corporate culture match before even starting the application process.
- Lanz, L./Thielmann, I./Gerpott, F. H. (2022): Are social desirability scales desirable? A meta-analytic test of the validity of social desirability scales in the context of prosocial behavior, in: Journal of Personality. https://onlinelibrary.wiley.com/doi/full/10.1111/jopy.12662
Lukas Lanz
Lukas Lanz is a doctoral candidate at the Chair of Leadership at WHU – Otto Beisheim School of Management. His research primarily aims to examine and improve practices around humans in the modern-day work environment. Next to his research on social desirability scales, he focuses on the interaction between humans and artificial intelligence, with a special emphasis on AI leadership.
Dr. Isabel Thielmann
Isabel Thielmann is the Head of the Independent Research Group Personality, Identity, and Crime at the Max Planck Institute for the Study of Crime, Security and Law. She primarily conducts research in the fields of personality and individual differences, (un)ethical decision-making, and prosocial behavior.
Professor Fabiola H. Gerpott
Fabiola H. Gerpott is an expert in leadership, diversity management, and organizational behavior at WHU – Otto Beisheim School of Management. She is committed to ensuring that diversity is valued more by both managers and employees. Her research focuses on leadership communication, diversity, and how companies can effectively shape “New Work” environments.