Is Behavioral Economics Dead? Here’s What You Need to Know

Sara Isaac | Aug 22, 2022

Sometime during the pandemic blur years, at an online conference, I ticked off one of the world’s pre-eminent behavioral economists with this up-voted question: Is behavioral economics responsible for encouraging a belief among decision-makers that tricky behavioral problems can always be solved with simple solutions?

“Not guilty as charged,” was his immediate reply, followed by a spirited, if somewhat irritated, defense of the principles of Nudge and interventions such as Save More Tomorrow, which has helped millions of Americans — including employees at my agency — better prepare for retirement with set-it-and-forget-it annual increases in their 401(k) savings.

Jason Hreha, however, thinks behavioral economics is 100% guilty — and not just of peddling silver-bullet solutions. Hreha, the polemical former global head of behavioral science at Walmart and founder of Dopamine and Persona, has been on a quest for years to put a stake in the heart of behavioral economics.

In 2020, Hreha penned this takedown, declaring behavioral economics a dead discipline. He recently piled on with several LinkedIn blog posts. The first discussed a letter in last month’s Proceedings of the National Academy of Sciences that called into question the efficacy of nudges. The second post — responding to pushback on the first — reminded readers that several comparative global studies have found that the opt-out nudge famous for increasing organ donation doesn’t actually work.

Is the reign of the ‘Nudge’ over? Should you care?

Hreha thinks applied behavioral economics is a zombie discipline that has been kept on life support by savvy public relations and academic self-interest. Here are three core areas of concern.

  • Publication bias. Academic journals sell articles — and academics make their careers — based on experimental findings with exciting results. At its worst, publication bias can invite the kind of p-hacking that eventually brought down former Cornell researcher Brian Wansink’s career. But even at its “best,” publication bias means the body of evidence in the academic record is skewed toward behavioral economics experiments that appear to work — leaving out a potentially much vaster body of evidence around interventions that do not. Many published studies also rely on small sample sizes to document effects that are hypothesized to hold true across much larger and more diverse populations. It’s important to remember that many of the foundational experiments of behavioral economics were conducted with a dozen or so college students in a university lab. Critics understandably question the application of such findings to real-world contexts.
  • Replication crisis. A scientific finding is confirmed when other scientists who follow the same experimental method produce the same results. The theory of gravity doesn’t hold true unless every time you drop an apple (under normal sitting-in-the-shade-of-a-tree conditions) it falls to the Earth. Traditionally, academia has not funded or rewarded researchers who want to take on the hard and unsexy work of repeating others’ experiments to confirm results. But more recently, some behavioral economists have done just that — and turned up little consistent evidence for many of the core tenets of the discipline. In fact, “loss aversion,” the famous Prospect Theory finding that put founding behavioral economists Amos Tversky and Daniel Kahneman on the map, has been called into question by new research.
  • Little bang for the buck. Another criticism of nudges and other “pure” behavioral economics interventions is that real-world impact can be quite small. Hreha cites a 2020 study by two UC Berkeley researchers that analyzed 126 randomized controlled trials run by two of the largest “Nudge Units” in the U.S. The researchers found the average effect size of interventions to be 1.4 percentage points, a far cry from the 8.7 percentage points suggested by academic literature — a difference the authors attributed to publication bias. The authors note that a 1.4 percentage point effect is still statistically significant — and others have pointed out that small effects across large populations can yield important behavioral impacts. Hreha, however, questions the opportunity cost, noting that the same amount of money and effort could have been better spent on interventions yielding more impactful results.
What’s a behavior change agent to do?

The beauty of classical economics — which assumes a made-up world where people are rational actors making smart choices that optimize self-interest in all situations — is that it allows for complex modeling and forecasting. Even today, economic modeling (often highly inaccurate) underpins pretty much every governmental policy implemented anywhere.

Although many of Tversky and Kahneman’s early behavioral economics experiments gleefully poked holes in rational choice theory, behavioral economics is still imbued with the same promise as the old model — that proven theories can be applied consistently across multiple contexts. Behavioral economics posits that humans are irrational, but predictably so.

However, another behavioral economics tenet is often summed up as “context is everything.” 

Perhaps the Hreha argument I most agree with is that context and creativity are critical to successful behavior change — and that in the search for cookie-cutter solutions, both can get lost. As Hreha writes, “In my experience, the best behavior change interventions come when creativity is informed by a scientific understanding of behavior — resulting in elegant, situation-specific solutions.”

In short, there are no silver bullets. But the science of human behavior — including an understanding of when nudges might work, perhaps as part of a multilayered intervention — is always a good place to start.

Sara Isaac is Chief Strategist at Marketing for Change.