Wednesday, 11 September 2013

How Nudges Work for Government (and Might Work Against Blueprint 2020)


by Kent Aitken


In the last few weeks some people took notice of the Behavioural Insights Team in the U.K. government, which sparked debate about behavioural economics and the "nudge" approach to public policy and social outcomes. It seemed that there were many misconceptions about what nudges are. So: a brief primer, then what I believe they could mean for Blueprint 2020.


Nudges


"Nudge theory... argues that positive reinforcement and indirect suggestions to try to achieve non-forced compliance can influence the motives, incentives and decision making of groups and individuals alike, at least as effectively – if not more effectively - than direct instruction, legislation, or enforcement."

An example of a nudge, in contrast to the alternatives: recently, New York City tried to stop sales of massive cups of soda on the basis that they were bad for both citizens and society (in health and health care spending, respectively). Here are some options:


  • Banning the sale of large sodas, which was their attempted approach, would be regulation.
  • Raising taxes on large sodas would be an economic incentive.
  • Running ads promoting healthy lifestyles and noting the health risks of sugary soda would be education.

A nudge, on the other hand, would have been something like changing the range of cup sizes so that the biggest cups seem more excessive. Tim Hortons recently did exactly this in reverse, by shifting their cup sizes a notch: if the XL coffee is called a large, it sends a signal that it's more "normal" to drink that volume of coffee.

Nudges can work in conjunction with other policy levers, and can be surprisingly potent. For instance, one study found that a roadside sign showing speeders a frowning face slowed them down more than a sign showing their speed and the associated fine.


A Misunderstood Policy Instrument...

The misconceptions arise from the kinds of goals nudges tend to be assigned. Activities that are almost universally regarded as bad (e.g., theft) are typically covered by direct means, such as laws. Very few people are willing to argue that laws against theft unduly restrict personal freedoms. However, activities such as smoking cigarettes are trickier. Here, people invoke the principle that they have the right to make informed decisions about their own lives, even unhealthy ones. Yet back when this debate was ongoing in Canada, it was estimated that cigarettes cost about four times as much in health care as they raised in tax revenue. So less smoking was good for Canada, on the whole.

There are many such social outcomes worth pursuing that sit in a gray area for government intervention, and this is where nudges tend to be the best policy instrument. So nudges get maligned as paternalism, big government, and the nanny state. But in reality, nudges are about the implementation method; what constitutes an appropriate social outcome is a completely different question. In considering the utility of nudges, we may as well assume that societal goals are already established and that we're at the point of selecting policy levers.


...With an Important Role to Play

So I see this emerging field simply as the recognition that information alone is not necessarily sufficient for people to make decisions that are in their, or society's, best interests. 

This is because humans respond, and respond wildly, to their environment. It's a fascinating evolutionary quirk for socializing: we instinctively match others' postures, gestures, and even accents to build familiarity, taking cues on what constitutes normal behaviour. We've also exapted those instincts (repurposed traits that evolved for one function to serve another) into shortcuts and rules of thumb for decision-making. The well-known example is the opt-in/opt-out framing for organ donation: countries get roughly 90% donor rates with an opt-out model ("check here to be removed from the organ donor list") and more like 10-20% with an opt-in model ("check here to be included on the organ donor list"). It's largely because the default choice sends a signal about what is normal.

And there are many such examples. The U.K. Behavioural Insights Team simply acknowledges this and sets about designing policy instruments for such a world. Its core function isn't deciding what society should be doing; it's taking the scientific method and applying it to the complex world of policy: developing hypotheses, testing them, and adjusting approaches as necessary (see: Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials).

In my view, nudges are scarcely controversial. Basically, if you conducted user testing on policy instruments and tweaked for maximum effect, sometimes you'd get education, sometimes laws, sometimes incentives, and sometimes these bizarre indirect methods that we're currently calling nudges. It's not paternalism; it's simply a question of which policy instruments work, and work cost-effectively. So I absolutely think we should be exploring this field in earnest.


But Wait, You Mentioned Blueprint 2020

If the premises behind nudges are valid, I think it's important to consider how our organizations' standards, defaults, and procedures (in the parlance, "choice architecture") are affecting our decisions. Since June we've been having this wide-ranging conversation about the future of the public service, and the difficulty of meaningful change is a common theme (see: Where Good Ideas Go to Die and Moving Public Service Mountains, Part I).

And, to add to the many possible reasons: what if, even when we have direction or policy cover for positive progress, we're continuously stacking the deck on the side of the status quo?

An example (which I overuse, but it's easy to explain, so I beg forgiveness): let's say that, from workflow and policy perspectives, a desk-bound worker and a mobile worker are theoretically equivalent options. The information about mobile efficacy is available, the forms for securing the equipment exist, and both are permissible arrangements. However, when on day one at a job you're assigned a desk, a desktop computer, a landline, and no VPN, what signal does that send to both manager and employee about what is normal and what is aberrant?

Do such procedural barriers become cognitive biases? I would suggest the answer is "yes, and massively."

So when we're looking at moving the public service towards our ideal for 2020, we should be ruthless in examining the environment in which we work. Ideals can't simply be possible if forces are nudging in the opposite direction. Ideals have to seem like the standard.

They have to seem downright normal.


Making the Vision a Reality

The U.K. provides another concept to borrow and remix (both internally and externally): contestable policy. We consult on policy in development; why not solicit feedback on existing policy and process, to see if it works as intended, or if the environment to which it applies has changed?

And we have the tools. This could happen today. Copy and paste into our GC-wide platforms that happen to have discussion threads built in, and just ask: does this still work the way we thought it would?

It’s the same approach as the Behavioural Insights Team: the scientific method, applied to government.






