Showing posts with label design thinking. Show all posts

Friday, 10 February 2017

Public sector innovation


by Kent Aitken (RSS: cpsrenewal, Facebook: cpsrenewal, LinkedIn: Kent Aitken, Twitter: kentdaitken, GovLoop: KentAitken)


This post is a response to an email from a provincial government colleague, who asked for frameworks and models for organizational innovation. I, perhaps, got carried away; it's a little lengthy for a blog post (here's a print-friendly version). I'd like to consider it an evergreen draft for the time being, as I'm sure I'm missing major chunks or have the blinders on for parts; I welcome comments on the working document. - Kent

Introduction, and innovation defined


It’s important to make sure that everyone involved in your conversation about innovation is using roughly the same meaning; innovation gets overused to the point where it’s amorphous and meaningless. For this document, we mean change or novelty - so change to existing concepts, or new concepts - that gets implemented at scale and that creates value. Here’s NESTA’s model:


Source: NESTA

Two takeaways from the above: 1) good ideas that flop during testing aren’t innovations, but they are part of the innovation process; 2) “making the case” comes after testing.

It’s unhelpful to think of innovation as a skill, or of people as innovative. Innovation is a discipline that can be practiced, and some people practice it - by inclination, affinity, or opportunity - more than others, and to greater effect. It is better understood as a process than an event.

At the core of innovation is the idea of problem definition: know what problem you’re trying to solve. (Put differently, by Lincoln: “If I had five minutes to chop down a tree, I'd spend the first three sharpening my axe.”) This applies both to individual efforts towards innovation, and the meta-level question of innovation in government.

Turning to that problem: is the question fostering innovation? Encouraging innovation? Allowing innovation? Facilitating innovation? Supporting innovation? A combination of all of the above? Even a seemingly innocuous change in what verb we’re using reflects very different mental models about how innovation happens, and what stops it.

The discourse about public sector innovation jumps too quickly from problem to solution, skipping past the question "Why aren’t we innovating more now?" (For starters, sometimes you actually just don't need it.) And when you do, the solution is rarely about simply connecting ideas to senior executives. Surface-level fixes to perceived innovation gaps are what lead to Dragon’s Dens to pitch "innovative ideas," under-supported innovation labs, hackathons to "just see what people come up with," and crowdsourcing for the sake of testing the idea of crowdsourcing.

The need for innovation


The concept that “government needs innovation” is a second-order condition, not a starting point. In reality, it’s that - sometimes, in context - particular programs are better served by innovation than a continuation of previous activities. In aggregate, this can turn into “government needs more innovation.” But the question can’t be “How do we innovate more?”, it has to be “How do we create a system whereby the status quo and valuable alternatives are on a level playing field, such that we can test and prove new ideas as reliably as we accept old ones?”

To back up slightly, let’s consider an arc of innovation that is both an analogy and a predecessor: telecommunications. We’ve gone from letter-writing to printing presses, telegraphs, telephones, the internet, and now to low-cost ubiquitous mobile connections. Every combination of one-to-one, one-to-a-select-few, one-to-many, and public forums; attributed or anonymous; in every format; all at a vanishingly small cost.

But here's the key: at one point, to communicate long-distance you had one option: handwriting a letter. Later, you had two: handwriting a letter, or paying to have something reproduced many times on a printing press. You didn't have to rely on a letter when it wasn't the best option. As more and more options became available, you could match your communications goal more precisely to different ways to achieve it.

Likewise, now we have a wider range of policy development approaches and policy instruments, which means there’s a greater chance that we can match the right approach to the right situation - if we can deploy those approaches without artificial barriers.

Some common approaches


We shouldn’t slip into mental models that restrict us to an established catalog of "innovations." However, it might be useful to consider the toolkit that public sector organizations are often drawing on in efforts to innovate. Some of these are focused on inputs, some on outputs (that is, how we approach decisions on what to do, versus what we actually do).


  • Foresight: systematic exploration of a range of plausible futures for a field, technology, or policy area, often used in environmental scanning.
  • “Open” approaches: citizen and stakeholder engagement in service and policy design. This can be online deliberation, argument mapping, citizen’s panels, facilitated sessions, roundtables, or dozens of other methods. See: People and Participation, Designing Public Participation, and Dialogue by Design.
  • Participatory budgeting: overlaps with citizen engagement, but actually sets aside portions of government budgets for citizens to decide on. Usually comes with a lot of work to create a fair and inclusive process, including web platforms to help people explore, debate, and vote on options, and to consider trade-offs and competing views.
  • Crowdsourcing: overlaps with citizen engagement, but let’s think of crowdsourcing as aiming for light inputs (ideas, concerns, suggestions, edits, votes) from many people. See: Crowdsourcing as a new tool.
  • Citizen science: creating platforms (toolkits, web platforms, games, physical infrastructure) that allow citizen inputs to government data collection: e.g., water and pH levels, photographs from standardized perspectives, star field mapping, protein folding. Here’s a cool example: Water Rangers.
  • Open data: releasing data created and collected by government to allow for third-party uses: social, economic, and academic research; platforms for access to government services and data; business intelligence, etc.
  • Hackathons: collaborative problem-solving sessions, typically with technological solutions (but not always), that bring people together to define a problem, then prototype and test minimum viable solutions, usually within 48 hours. I helped organize and run this one.
  • Behavioural insights: generating and testing hypotheses from the behavioural psychology and behavioural economics literature to examine and optimize citizens’ interactions with governments (e.g., different language on letters from tax organizations leads to different response rates and times). Often paired with A/B testing two products/approaches to get authoritative data on which worked better. It’s worth skipping straight past the articles to the classic book on the topic.
  • Impact-based delivery models: can be partnership models, procurement, grants and contributions, or other funding models. Governments are increasingly exploring ways to get away from defining strict requirements for contracts, partnerships, or products up front, instead dispensing funds based on measurable impact, with the approach left up to the third party. E.g., the UK Social Value Act, social finance, pay-for-performance, or many public-private partnership governance models.
  • Challenge prizes: posting monetary prizes for hard-to-solve problems that can be attempted by any individual or group. Groups attempt to solve problems of their own volition, with no guarantee of compensation. Often looking for technological or research proofs-of-concept, which governments can then purchase or pursue. Challenge.gov and the X-prize are the common examples.
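Several of these approaches ultimately rest on measurement. The A/B testing mentioned under behavioural insights, for instance, usually comes down to a simple statistical comparison of two response rates. Here is a minimal sketch of the standard two-proportion z-test; the letter-trial numbers are hypothetical, not drawn from any real study:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: did variant B's response rate differ from A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that the variants are identical
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical tax-letter trial: 10,000 letters per variant.
# Variant A (standard wording): 3,100 responses; variant B (revised wording): 3,350.
z = two_proportion_z(3100, 10000, 3350, 10000)
print(round(z, 2))  # → 3.78; |z| > 1.96 suggests a real difference at the 95% level
```

In practice a statistics library would do this (and report a p-value); the point is only that "authoritative data on which worked better" is a small, well-understood calculation, not a black art.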

That said: thinking in the above terms can be a trap. The process of deciding on action can often be boring. Good problem definition and idea analysis can lead to re-using old solutions, or tweaking them only a little, and this has to be okay.

If a government policy team spends time making observations and conducting exit interviews with citizens using a particular service (an activity that could be equally called citizen engagement, behavioural insights, or design thinking) and makes minor policy changes based on that insight, is that innovation? If it creates value and they can scale it, sure.

On the flip side, for any private sector leap forward or emerging technology, there’s always someone waiting to say "How might we use [artificial intelligence/big data/data analytics/platform models/driverless cars/virtual reality/quantum computing] in government?" It’s worth musing about, but will only be worthwhile when that thinking happens to connect with a genuine problem - in a real-world program, policy, or service area - that needs solving.

Which means that none of the above approaches can be used in a vacuum: they need a connection to mandates, and people who can understand and implement them. It’s tempting to think that any of the above might be the perfect approach in a given situation, but unless the problem owner can and should deliver on it, it isn’t. It is bizarrely easy to forget that governments exist to deliver on mandates from elected officials.

It’s worth noting that many of the listed approaches are, in essence, just different ways to gain information and insight into stakeholders’ needs - that is, they can be routes to better problem definition and different ideas. They are ways to understand, and hopefully manage, complexity.

Internal innovation campaigns

On that note, a quick thought on internal innovation campaigns: that is, asking employees for ideas on how to improve the workplace or the organization's outputs. This is where there’s more control, less risk, and more room to play. However, if you open the floodgates and receive 1,000 ideas, you probably have to reject 995+ of them. You only have so much time and executive willpower, and some ideas will just be infeasible. Consider only asking for ideas when management is genuinely ready to make change, and consider scoping out a theme area.

Many ideas will be predictable: Google's 20% time, skip-level meetings, development opportunities, 360 degree reviews, use [technology] for [activity], bring-your-own-device, buy [technology] for employees, telework, etc. If you’re interested in porting an established process into your organization, consider instead engaging employees on the details and implementation.

Some companies take internal innovation campaigns very seriously, and show more promise than my skeptical take. See: Why governments would never deploy Adobe’s Kickbox and why maybe they should.

Design thinking

Design thinking gets its own section, because if you solve for design thinking, you often end up solving for much of the above. It’s a structured process that helps organizations work through the problem definition, their own capacity, and the right approach - which might be innovative, or it might be boring.

These five steps are at the core of design thinking:

Source: IDEO

  1. Empathize: understand stakeholders and their needs, mindsets, challenges, and attitudes. Common techniques include interviews, observing people interacting with services, or group dialogue models. People can’t comprehend how powerful this is - and how much they were assuming - until they actually do this themselves.
  2. Define: once again, a focus on defining your problem really, really well.
  3. Ideate: generate ideas for how to solve the problem. Allowing different stakeholders into this process, having different perspectives mix, and using facilitation techniques to create space for creative thinking are all proven to generate more novel ideas than simply asking.
  4. Prototype: make anything (e.g., paper mock-ups of service interactions, lego buildings, rough websites, draft policies) that people can touch, explore, and react to. The act of building will help problem owners and idea generators understand how something will look, feel, and work in practice...
  5. ...but not as much as testing it will. Ideas from the previous stage will invariably be laden with assumptions, and when users - real, honest-to-goodness end users - start interacting with even a rough design, assumptions will be revealed, which will create opportunities to correct them. A sample size of five testers will reveal most of the critical failure points in, for example, a web interaction.
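The "five testers" claim above comes from a widely cited usability model, often attributed to Nielsen and Landauer: if each independent tester has probability p of encountering a given problem, the expected share of problems found after n testers is 1 - (1 - p)^n. With the frequently quoted p ≈ 0.31 (an empirical average, and an assumption here - real values vary by product), five testers surface roughly 85% of problems. A quick sketch:

```python
def problems_found(p=0.31, n=5):
    """Expected share of usability problems surfaced by n independent testers,
    where each tester has probability p of hitting a given problem."""
    return 1 - (1 - p) ** n

# Diminishing returns: most value comes from the first handful of testers.
for n in (1, 3, 5, 15):
    print(n, round(problems_found(n=n), 2))
```

The curve flattens quickly, which is why the design-thinking advice is to run several small test rounds (fix, then re-test) rather than one large one.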

Design is also a discipline to practice, not a skill or an easily replicable set of steps. There are many, many ways to get the social science wrong at the empathize stage, generating false results, if you don’t know what you’re doing. Work with experienced designers while building your own capacity.

A related model is the “double diamond”, below. It describes the process of opening the conversation up to allow many considerations into the problem space, looking at many possible stakeholders and issues, before narrowing the focus down into a tight problem definition. Once that is done, the conversation opens back up to allow many possible solutions to surface, before deciding on one or some and taking steps to define and implement them.

Source: Rachel Reynard

Organizational models and strategies

It’s very different to talk about innovation in a localized, contextualized way (e.g., within a policy/program/service unit) and in an organizational, cross-government way. Individual units will innovate constantly, and the efforts may never bubble up to the government-wide radar. In many cases, the change won’t be called “innovative” - it’ll be born out of incremental change, financial pressures, or an opportunity to simply improve the program and it’ll just get called “better.”

The classic piece on innovation in private sector organizations found significant differences depending on where “innovation units” were positioned within the organizational structure. The long story short is that the wrong design and governance decisions can nearly guarantee failure.

However, organizational-level strategies can support individual, contextualized innovations in a few ways, including:

  1. Identifying and removing barriers
  2. Creating safe space for experimentation
  3. Providing training, tools, and resources

Barriers to new approaches

Get used to the idea of problem DNA. No barrier or obstacle is a homogeneous element; each is a unique combination of policy, risk, culture, understanding, time constraints, opportunity costs, process, communications, and other building blocks. Solving for one element rarely solves the problem.

Let’s return to the idea that, with new options for policy development and implementation, there’s a greater chance to match the right approach to the right situation. However, rules, policies, and processes were often designed before these options became available, so there are often barriers to their use, including hiring, mobility, contracting, procurement, and training. Creating flexibilities or exception processes in these systems is the blanket, one-size-fits-all approach to freeing up space to innovate. Systematically taking steps to understand, and adjust for, the impact of these forces on how employees work is more effective - albeit more time-consuming and demanding of more management commitment.

Safe spaces and innovation labs

An increasingly common approach is to create “innovation labs,” which are theoretically safe spaces that can bring people together to define problems and experiment with solutions.

For a proper, peer-reviewed definition:

“An innovation lab is a semi-autonomous organization that engages diverse participants—on a long-term basis—in open collaboration for the purpose of creating, elaborating, and prototyping radical solutions to pre-identified systemic challenges.”

These can be within organizations, at arms-length, or purely external as partners. They tend to be centres of expertise for design thinking, prototyping, facilitation, and other useful process skills. It is not that they are full of smart people (though they tend to be), it is that they are full of people with toolkits for organizational learning and for helping groups of people reveal and explore their collective wisdom and knowledge.

Labs tend to partner with business units once a need for change or exploration is identified. They essentially act as a combination of host, consultant, and partner for a policy/program/service development, implementation, or evaluation journey.

In governments, they tend to have dotted lines to senior executives to help create staffing, spending, and governance exemptions that allow for more agile, free-flowing operations. This is important, as partnerships and collaboration within and outside the organization tend to define such labs.

NESTA has an innovation lab, MindLab is one of the longest-standing examples, and the MaRS Solutions Lab in Toronto has a helpful article on Social innovation labs: Top tips and common pitfalls. (There are many others, in and out of government. E.g., Alberta's CoLab, New Brunswick's NouLAB.)

Anecdotally, working with external labs (i.e., those outside of government) appears more likely to allow for the expertise, safe space, and transparency required. Expect that work with such labs will take a lot of time. The good news is it’s the appropriate amount of time for meaningful change, and the simpler solutions you would have arrived at through a half-hearted exploration would be incomplete, misleading, and ultimately less effective.

Central hubs, expertise, and resources

Organization-wide innovation strategies can also take steps to ensure the on-demand availability of training (e.g., on citizen engagement, facilitation, or data science), resources (including people and money), or expertise and advice.

Many skills central to the common toolkit of approaches are disciplines in and of themselves. They’re too specialized to fit into many permanent teams, and they’re called on at irregular intervals. For such skills, governments are creating central centres of expertise to house permanent specialist staff, who act as common resources to the rest of government: e.g., public engagement offices or behavioural insights units. The Government of Canada has a hub for a variety of approaches. They might operate on cost recovery or as free resources, but with criteria for choosing the highest-impact projects to work with. If such specialists are creating value above and beyond their cost, capacity can be added over time (e.g., the UK Nudge unit).

Communities of practice are also useful for building capacity in skills for which external professional networks are few and far between. Ideally, managing these communities should be part of someone’s job, if not their full-time job. They create connective tissue between experiments and pockets of knowledge scattered across organizations, which is particularly important for evolving, emerging fields of practice.

On levers

What levers for change do you have? Is your organization willing to make substantial changes to, for example, HR or contracting policy for the sake of an innovation agenda? It’s helpful to think through two lenses:

  1. What’s the best we can possibly do, given our current parameters?
  2. What possibilities would be available with certain strategic structural changes?

Levers for change may include:

  • Laws, policies, and regulations
  • Performance management frameworks
  • Executive/political commitment and champions
  • Internal and external communications
  • Training
  • Toolkits and resources
  • Hiring (or alternative models like short-term tours, fellowships, exchanges)
  • Procurement
  • Contracting
  • Organizational design, including the labs and hubs from above
  • Changes to processes (e.g., mandatory fields in spending proposals)
  • Common-use programs (e.g., centrally-led open data portals, citizen engagement tools, challenge prize platforms, web platforms that any department can use)

What to do next


    1. Make sure the most important stakeholders in your organization are talking about the same thing when they say “innovation.”
    2. Explore the current barriers, and don’t settle for surface-level answers (e.g., “risk aversion,” the most common, is a symptom, not a cause: what’s behind it?).
      1. Risk symptoms are most easily solved by 1) crisis or scrutiny that suddenly makes novel approaches more palatable than a publicly failing status quo, or 2) commitment and willpower, most often stemming from the political layer. For smaller-scale risks, commitment and willpower from senior executives can suffice. That is, they have to personally evangelize and clear obstacles for the change. Repeatedly and consistently.
    3. Agree on parameters: resources, level of commitment, available levers for change.
    4. Identify, or more likely enlist or partner with, people with deep expertise.
      1. Take a moment to consider how your organization knows expertise when it sees it, particularly for new, rapidly evolving, un-professionalized skills (e.g., there are no equivalents to Certified Accountants for public sector innovation).
      2. Find a role for the people in-house who want to be a part, and make sure it is neither above their heads nor meaningless (considering oneself innovative doesn’t mean one can lead this work; however, anyone who’s been pushing for these approaches will feel left out if external expertise is parachuted in).
    5. Create criteria for project intake and create ways for people to find you.
      1. There’s danger here, commonly referred to as “solutions looking for problems.” Ideally the innovation model is that everyone is focusing on their mandates, but with a light layer of constant learning, environmental awareness, and knowledge of the organization-wide innovation capacity for when it’s needed and would solve the problem better.
    6. Read and talk to people: there are tons of great resources out there. The key is to learn how to find credible sources that best match your current context, and to triangulate between a few for each concept. Supplement with rapid evidence assessments.
    7. Write the future in pencil: experiment at both the project level and the meta, innovation-in-government level, learn, and change. If your "innovative project" has to be a success - that is, a project success, not an organizational learning success - you’ve sold it wrong and your organization is thinking about it wrong.


    Wednesday, 15 July 2015

    A Government that Learns by Design


    by Melissa Tullio (RSS: cpsrenewal, Facebook: cpsrenewal, Twitter: creativegov)

    In an earlier blog post, I talked about values and our social contract with people (see: Open Gov, Values, and the Social Contract). I touched briefly on an idea that Nesta posted on their blog about cognitive government. Nesta posed an interesting challenge in that post:
    How can governments shorten their learning curve to more effectively adapt to the technological changes that surround them?
    It will come as no surprise to people who have worked with me, or even if you've read a couple of my posts here on this blog, that I think we need to begin with examining what values we support and demonstrate inside government in order to tackle Nesta's challenge. An assumption embedded in their challenge is that government has the kind of learning culture to support adaptation, so I think we need to dig deeper.

    Based on my seven years exploring the government culture, an assumption I've been able to test (and confirm) is that innovation and creativity are things that governments don't inherently value (they usually require you to fill out a business case template to consider such things <tongue-in-cheek>). So I have another challenge I've been thinking about related to cognitive government.
    How might gov support a learning culture that allows for experimentation and creative ways of approaching the way we do our work?
    My hypothesis is that a government that learns by its very design, and values creative mindsets, will be a government that adapts better to technological changes (and other shifting patterns and citizen expectations). My big idea would be to create opportunities in government, from the inside-out, to think and act like a lab.

    Gov that Thinks and Acts like a Lab

    Design thinking principles, from d.school's bootcamp bootleg
    Because I've drunk the design thinking Kool-Aid, so to speak, I naturally believe that that's the approach we need to start applying in order to change the way we build and deliver government programs. There's a lot of lab and lab-like work going on across the globe to start transforming how government delivers services and programs to citizens.

    In Nesta's 2014 report on i-teams — units that are established inside government to enable innovation in service design — successful teams have a few things in common with design thinking principles. One of the ten principles identified is that i-teams "have a bias towards action and aim for rapid experimentation."

    This is where you tell me "it'll never happen." Government isn't a fast-moving animal; it's the sloth[1] of the services kingdom. But I'd argue that it's not about speed of overall change, or about being the first to play with the latest shiny object; it's about figuring out quickly how it might add value to existing processes.

    The "figuring out quickly" part is where we can use (that is, demonstrate that we value) creativity — it's where, inside government, we should find ways to experiment, learn, and use those lessons to feed into existing programs and processes. And for that, you need people who understand the system, and are able to learn and apply some creative thinking.

    People who Think and Act like a Lab

    Any organization is as nimble and adaptable as the people it fosters. A learning culture supports experimentation; a culture that values creative mindsets is where innovative people are driven, and where they thrive. If you've followed me this far in the post and agree that gov should think and act more like a lab, we need to start thinking about how to attract and keep people who value innovation and creativity.

    What would move us closer to adopting a culture that supports i-teams? What barriers are in the way, and how do we start getting around them or removing them completely? And if we can carve out a tiny space for them somewhere, what experiments might we run, on a small scale and in much needed parts of government (e.g., procurement? IT? Budget?), to test my hypothesis above?

    [1] This, of course, is unfair to sloths. I saw a little guy on Meet the Sloths come back from having a leg amputated, and he re-learned how to climb within a day after his bandages were removed. But the sanctuary does support a learning culture (and experimentation), so that might have been part of the reason for his success.

    Tuesday, 12 August 2014

    The New Nature of Process


    by Kent Aitken (RSS: cpsrenewal, Facebook: cpsrenewal, LinkedIn: Kent Aitken, Twitter: kentdaitken, GovLoop: KentAitken)


    In a recent report on the next frontier of digital technology, Accenture created a model of the long history of challenges that have faced management.

    http://www.accenture.com/us-en/Pages/insight-looking-digital-being-digital-impact-technology-future-work.aspx 


    In short: the industrial era was characterized by a transition from individual craftsmen and artisans to large-scale processes, and this transition was enabled by repeatability. A worker didn’t need to know how the factory ran to screw part A into part B, and if he left, a replacement could be trained incredibly quickly. This was the age of Taylorism, of precision and measurability permitted by process and structure.
     
    Throughout the last century we’ve transitioned into an economy far more based on knowledge work (see Deloitte’s assessment, below), which meant the industrial management style ran into a crisis of rigidity, the solution to which was adaptive processes. Judgment, discretion, if-then statements, case management.

    http://dupress.com/articles/the-future-of-the-federal-workforce/ 

    However, for senior executives ultimately managing a variety of adaptive processes, the problem then became one of complexity. There’s too much going on, it’s too hard to understand, and the performance reports that are so useful for widgets-per-second are far less revealing.
     
    Accenture suggests that the solution to complexity is in digital. Specifically, "smart digital processes", which would feed decision makers key information exactly when they need it. My response is: maybe? In some cases? It seems the more plausible answer is a return to process - which is happening all around us, albeit with a crucial difference from the Taylorism of old.
     
     
    Process in the Knowledge Economy
     
    There's a common thread among the emerging approaches to governance. In his equation for today’s public policy, Nick highlighted several, including design thinking, behavioural economics, and public sentiment. We could add the field of facilitation, the practice of public participation, and innovation labs to the mix. All of which are hugely reliant on defined processes.
     
    The key difference is that the interim goal of the process of old was to remove the need for learning, whereas the process of today is designed to maximize the speed of learning. At the end of this post there are some links to example process kits: if-then guides to, essentially, helping humans understand other humans and the systems they live in.

    The end goal is the same: scalability and repeatability. In this case, it’s repeatably, reliably solving unpredictable, emerging, or complex problems. We’re on the same arc as the first graph, but for a completely different organizational paradigm.
     
    So the challenge for management becomes a new, grander problem of complexity. Where executives have been struggling to manage adaptive processes via industrial-inspired organizational designs, they’re going to be overwhelmed by managing a variety of learning processes without significant changes in management style. In some cases the if-then flow will be impossibly complicated, and in others it’ll need to be thrown out the window. A single node in a hierarchy will never be able to understand each process, only the principles behind them.


    What's in it for Us?
     
    We need to do it. It’s where the performance gains in a complex environment will come from. I’ll exapt an HBR article about how our personal learning curves regularly plateau. Here’s the graph, with learning on the Y axis and time on the X axis:

    http://blogs.hbr.org/2012/09/throw-your-life-a-curve/ 

    Success comes from knowing when to jump to the next learning curve, which is incredibly hard at the outset but maximizes the speed of progress.

    Embracing this learning curve will be cost-effective in two ways. First, there’s evidence that consensus-building through learning processes costs less in the long term than making and defending decisions (which will apply to both internal management and policy/program governance). Second, in the latter part of that learning curve we’ll reach a level of sophistication that allows economies of scale:
    • We’ll be able to reliably pull from a menu of processes and adjust to new situations, rather than starting near scratch every time
    • We’ll be able to recognize when we can leave these learning processes to citizens, businesses, and NGOs, and govern accordingly
    • We’ll be able to share and teach approaches broadly
    Returning to Accenture’s claim, organizations have run into a problem of complexity. Particularly for governments, however, I don’t buy their claim that the answer is in smart digital. Instead, I think we have to recognize that in many ways we’re back at the beginning, worried about scale and repeatable processes. Just very different processes.
     


    Example process kits:

    http://www.involve.org.uk/blog/2005/12/12/people-and-participation/
    http://labcraft.co/
    http://stepupbc.ca/explore-your-career-increase-collaboration/idea-navigators
    http://www.mindtools.com

    Friday, 21 March 2014

    More thoughts on the Copernicus formula

    by Nick CharneyRSS / cpsrenewalFacebook / cpsrenewalLinkedIn / Nick Charneytwitter / nickcharneygovloop / nickcharneyGoogle+ / nickcharney

    A while back I presented a model demonstrating what I consider to be the future of public policy (See: Blending Sentiment, Data Analytics, Design Thinking, and Behavioural Economics). Kent later observed that the model could in fact describe the more encompassing idea of governance writ large (See: Building Distributed Capacity). At first I agreed with his observation, but I've been quietly reflecting on it a lot lately, and the more I think about it, the more I get the sense that what I've put forward is more precisely a formula that informs governance. Or perhaps more rightly, one that could inform a particular way of "doing" governance, because governance is – as Kent himself recently noted (See: People Act, Technology Helps) – what people do.

    Recapping Copernicus

    If you didn't catch the original post (again, see: Blending Sentiment, Data Analytics, Design Thinking, and Behavioural Economics) here's the TL;DR recap of the formula:

    (Public Sentiment + Data Analytics) / (Design Thinking + Behavioural Economics) = Future of Evidence Based Policy

    It's a back-to-basics model arguing that the combination of what the public wants (sentiment) and what the evidence suggests is possible (data) is best pursued through policy interventions that are highly contextualized and can be empirically tested, tweaked, and maximized (design thinking + behavioural economics) – while those interventions simultaneously generate new data to support or refute them, under real-time and constantly shifting public scrutiny.

    Naming Copernicus

    I chose to name the formula Copernicus for the following reasons:
    • it speaks to the fact that the formula represents a significant reorientation in the field of policy development and execution; 
    • it suggests the amount of effort that will be required to overcome the inertia inherent in the current frame of reference; and
    • it conveys the sense that once the formula becomes the new frame of reference the old frame is no longer tenable.
    You may have noticed that I wrote "once the formula becomes the new frame" and not "if the formula becomes the new frame"; I did so subconsciously, noticed, paused, reflected, and kept it as is because my gut feeling is that it is only a matter of time before the formula's elements become as ubiquitous as the social media we used to talk about in a similar vein.

    Copernicus is a means

    It's a frame that helps you lean into the hard work of figuring out the variables. What do people want? What does the evidence suggest is possible?

    It's a frame that helps you lean even further into the harder work of structuring the execution. What policy levers are most likely to work? How do you design the interaction? How do you build adaptability into the prototype?

    It's a frame that helps decision makers gather rich information points and bring them to a series of decision points.

    Copernicus is not an end

    What I'm trying to get at is the fact that the formula isn't a panacea of simplification but a lens through which to better understand complexity. It doesn't tell you how to weigh the variables against one another, or what choice(s) to make, but rather it helps identify that which you ought to consider when doing so.

    To be honest, I was planning on writing a series of posts elaborating each of the formula's elements but every time I sit down to do so I get lost in the complexity of each of them. In short, I'm still learning, thinking them through, running them up against real world examples. I still plan on doing so, but I need to dedicate more time to think it all through.

    To this end, I'm considering convening a small discussion to test the model against recent policy choices made by different organizations (e.g. Canada Post's decision to end home delivery) to see precisely how it could help me both understand and explain a policy choice if I were in a position to make one. If this is a thought exercise that interests you, drop me a line; I'd be happy to run through it with you.

    Friday, 31 January 2014

    Redefining diversity in the search for ideas

    by Tariq PirachaRSS / cpsrenewalFacebook / cpsrenewaltwitter / tariqpiracha

    With Nick and Kent’s recent focus on a new way forward for policy development, I began thinking about the sourcing of design thinking from Nick’s piece, and the system for more reliable problem solving from Kent's piece. If we are to assume that policy making is to take this proposed course, who is the source of the design thinking? Where do the ideas come from?

    When I first joined the public service in 2001, I remember attending a Town Hall related to visible minorities and diversity in the public service. The question back then was not all that different from questions we see today: how do we attract and retain *diverse* talent for the public service?

    The assumption is that there is value in a diversity of backgrounds and what those backgrounds can bring to the table. The focus for visible minority groups, of course, is to provide more opportunity for those who are otherwise shut out of the process due to the colour of their skin. 

    And they weren't the only groups who focused on representation based on a particular demographic. Our meetings or consultations became an exercise in ensuring that we included a visible minority, a woman, a person with a disability, a member of First Nations, someone younger, someone older – the list goes on. A successful exercise in inclusiveness was a room filled with demographic diversity. 

    However, if the nature of policy development is going to change (as Nick and Kent suggest), then it follows that a redefinition of diversity may be required. 

    As Blueprint 2020 uses digital tools to bring public servants together across the country to envision a new future for the public service, the focus is on ideas. In short, government may need to shift from visible to invisible inclusiveness.  

    A good idea is a good idea is a good idea

    When I say “invisible inclusiveness”, I'm talking about a focus on the source of design thinking: tapping into a diversity of experience, opinion, and ideas.

    It’s the notion that cognitive authority needs to supplant institutional or positional authority. We should be drawn to good ideas and we lose out if we retain biases of any kind that favour source (one's position or influence) over merit (of an idea).

    It means a good idea is a good idea no matter where it comes from, and should be treated as such. It means there is value in seeking out other voices and ideas (even dissenting ones). It is a belief in producing better outcomes through consultation, collaboration and cooperation. It is an acknowledgement that no one person has all the answers. 

    The opportunity here is that a person's value is determined by their contribution, not by the colour of their skin, the year they were born, or the nation to which they belong.

    It’s asking for a change in culture, which is no easy task - there is still a long way to go. Collaboration and collaborative tools, while increasingly common, are by no means pervasive in the public service. And these tools, and the culture they may foster, are no guarantee that racism, discrimination and exclusivity will disappear. 

    However, if better policy outcomes and an effective public service are the goals for the public service, then the focus on finding the best ideas will truly need to shift from obsessing over who is getting a seat at the table to stepping out of the boardroom.