Dan Ariely

DESCRIPTION:
Psychologist and behavioural economist Dan Ariely is known worldwide. Yet his famous study on honesty is under suspicion: data falsified, findings refuted. This is the story of the case that shook the scientific world.
Ariely, the data and the half-truth: results apparently falsified, incorrect or inexplicable
In August 2021, a post appeared on the data analysis blog DataColada, drawing on the work of a team of researchers who chose to remain anonymous. Its subject was a single Excel file. The finding: the data it contained did not behave like real-world data. It looked as though it had been fabricated. What followed was one of the most revealing debates on research integrity in recent years, and a lesson in how incentives, fame and weak institutional controls can interact in the scientific community.
The psychologist and behavioural economist Dan Ariely is regarded as one of the world’s best-known science communicators. His books have become bestsellers, his TED Talks have been viewed millions of times, and his research has shaped debates on human behaviour in politics, economics and everyday life. The fact that he, of all people, is at the centre of a debate about fabricated data is more than a scandal. It is a diagnosis.
The study in the academic journal and a suspicious file
In 2012, Dan Ariely and his colleagues published a study in the academic journal PNAS (Proceedings of the National Academy of Sciences). The study on honesty described a field experiment involving an American car insurance company. Insurance customers were asked to state their current mileage. The experimental manipulation was simple: one group signed a declaration of honesty right at the start of the form, the other only at the end. According to the authors, those who signed right at the start provided significantly more honest information.
This sounded plausible, was well presented, quickly became the central argument of a bestseller, and found its way into government advisory bodies across several continents. The psychological principle that moral reminders are most effective when they precede the action seemed to be confirmed.
Yet the Excel file's metadata told a different story. Ariely was listed as the creator and last editor of the file before it was forwarded to co-author Nina Mazar. And the data itself behaved statistically in a way that real odometer readings do not: no rounding, no clusters around psychologically significant numbers, but instead an almost uniform distribution across the entire range of values. Such a pattern is simply inexplicable in human self-reports. It is the pattern that arises when numbers are not measured, but generated.
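The distribution argument can be made concrete with a small simulation. The sketch below uses synthetic data, not the study's actual file; the 60% rounding rate and the mileage range are illustrative assumptions. It compares the share of round-thousand values in simulated human self-reports with a uniformly generated series of the kind the analysts described.

```python
import random

random.seed(42)

def round_thousand_share(readings):
    """Fraction of readings that are exact multiples of 1,000."""
    return sum(1 for r in readings if r % 1000 == 0) / len(readings)

# Simulated human self-reports: people tend to round odometer
# values to the nearest thousand (the 60% rate is an assumption).
human = []
for _ in range(10_000):
    miles = random.uniform(5_000, 50_000)
    human.append(round(miles / 1000) * 1000 if random.random() < 0.6
                 else int(miles))

# Fabricated data drawn uniformly at random shows almost no rounding.
fabricated = [int(random.uniform(5_000, 50_000)) for _ in range(10_000)]

print(f"human:      {round_thousand_share(human):.3f}")
print(f"fabricated: {round_thousand_share(fabricated):.3f}")
```

A real forensic analysis would look at the full digit distribution and apply formal statistical tests, but even this crude measure separates the two patterns sharply.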
The dataset under the microscope: suspicion becomes certainty
The DataColada authors presented their findings systematically. In addition to the distribution anomaly, they found another curious feature: the baseline mileage figures in the dataset were formatted in two different fonts, Calibri and Cambria, split exactly in half: 6,744 rows in each. Such a split never occurs in a real data entry process.
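The analysis suggested that the second block of rows had been cloned from the first, with a small random number added to each mileage. The sketch below uses entirely synthetic data to show how such cloning leaves a fingerprint: the row-wise differences between the two blocks all land inside one narrow band, something independent observations would never do (the 0–1,000 offset range and all other values here are illustrative assumptions).

```python
import random

random.seed(7)

# Simulated "Calibri" block: plausible baseline odometer readings.
calibri = [random.randint(5_000, 50_000) for _ in range(6_744)]

# Simulated "Cambria" block: each row cloned from the Calibri row
# with a small random offset added (the cloning hypothesis).
cambria = [r + random.randint(1, 1_000) for r in calibri]

# Row-wise differences: independent samples would scatter across
# the whole mileage range; clones sit in one narrow band.
diffs = [b - a for a, b in zip(calibri, cambria)]
print("difference range:", min(diffs), "to", max(diffs))

suspicious = min(diffs) >= 0 and max(diffs) <= 1_000
print("cloning fingerprint:", suspicious)
```

With genuinely independent rows, the differences would span tens of thousands of miles in both directions; a band this tight is the signature of duplication.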
Added to this was a striking discrepancy in the sample size. The study claimed to have analysed data from over 13,000 policies and 20,000 vehicles. The insurance company later stated that it had supplied data on only 3,700 policies and around 6,000 vehicles. The company conducted its own internal audit and confirmed in writing that the data set provided had been “manipulated and supplemented with synthesised or fabricated data”. Its own replication found no significant effect.
In this context, the question of who fabricated the data is more than an accusation; it calls the entire research project into question. Ariely admitted that the data had been falsified, but denied having carried out the falsification himself. The insurance company rejected this account. And the file's metadata contradicted Ariely's version of events.
As reported by sz.de and other media outlets, a second prominent figure in behavioural economics soon found herself in a similar situation: Francesca Gino, a professor at Harvard Business School known for her research on dishonest behaviour, faced comparable allegations. That two leading figures in the same field came under scrutiny is no coincidence; it points to a structural problem, not merely individual errors.
Superstar of science, and the logic of the market
In the years leading up to the controversy, Dan Ariely of Duke University had established an institutional standing in academic psychology that was virtually unrivalled. When he appeared in the media and at conferences, he spoke not only to fellow academics, but to politicians, CEOs and millions of TED viewers. His influence was real and measurable: the Obama administration drew on findings from his research, and the Israeli government signed a contract reportedly worth around $17 million.
This superstar status is significant both psychologically and sociologically. It creates incentives that can conflict with scientific integrity. Anyone who is famous and whose research is commercially exploited through consultancy contracts, lecture series and book deals is under a pressure that differs fundamentally from that of an unknown basic researcher. It is no coincidence that high-profile fraud cases so often involve prominent scientists: the incentives scale with the fame.
The lie, if it were one, would then not be merely a psychological failure on the part of an individual. It would be the product of an incentive structure that rewards spectacular, media-friendly results and penalises quiet, inconclusive findings. Research that confirms what people want to hear finds an audience in the attention market. Research that contradicts expectations or finds nothing remains in drawers.
Replication research: the truth comes to light
The study on honesty was not the only one to come under fire. The entire paradigm of ‘moral cues’, the idea that activating ethical schemas prevents dishonest behaviour, was subjected to repeated replication attempts in the years that followed. A meta-analysis conducted in 2018 examined 19 independent replication attempts. The average effect was small and pointed in the opposite direction to the original claim. The studies from Ariely’s laboratory had consistently yielded the largest effects.
When original studies systematically produce stronger findings than all subsequent replications, and when these original studies all originate from the same laboratory, this pattern raises justified doubts.
Added to this was a lack of evidence about the data's origin. Ariely and his co-authors were unable, upon request, to reconstruct where and how the laboratory data for the Ten Commandments experiments had been collected. A psychologist on the other side of the country, who was named in the acknowledgements, stated that the study could not have been conducted as described: sample size, incentive structure and logistical implementation did not add up.
What the case teaches us about research fraud
This case is not an isolated incident. It is a lesson in systemic weaknesses within science, and the psychologists and behavioural economists working in this field are well aware of it. Peer review does not examine raw data. A well-constructed, internally consistent dataset passes peer review just as reliably as a genuine one. Fraud is not apparent from the manuscript; it only becomes apparent when someone requests and analyses the raw data.
The reforms that could change this – such as pre-registration of hypotheses, mandatory publication of raw data and compulsory replication before any findings are adopted into policy – were not standard practice at the time of most of the studies discussed here. In large parts of the scientific community, they still are not. Yet institutional trust is quite clearly insufficient as a control mechanism.
For journalists, political advisers and informed members of the public, a simple checklist can be derived from this case: Is the raw data accessible? Are there independent replications with documented effect sizes? Was the result commercialised before its robustness was established? In the car insurance study, the answers were: no, no, yes. This is not a footnote; it is the pattern from which scientific misinformation arises.
Conclusion
Several independent lines of evidence (digital forensics, the insurance company's contradicting account, systematic failures to replicate the results, and unverifiable data sources) raise serious doubts about key publications that established the reputation of the world-renowned psychologist and behavioural economist Dan Ariely. Influence and persuasiveness are no substitute for data integrity. And science that is optimised for market logic reliably delivers results that merely sound good.