What do policymakers want?

Originally published by Enrique Mendizabal in May 2014

[Editor’s note: this post has been edited to include a recent study from Australia.]

Three recent reports on the views of policymakers have been making the rounds in influential blogs and online spaces.

From the United States

The first is a report on what White House policymakers in the US want from researchers, which first came to my attention via Duncan Green’s blog. The paper, What Do Policymakers Want From Us? by Paul Avey and Michael Desch, interestingly found that:

policymakers do regularly follow academic social science research and scholarship on national security affairs hoping to draw upon its substantive expertise. But [they] put into question the direct relevance of the most scientific approaches to international relations.

And, to the surprise of many, their findings challenge:

the “trickle down” theory that basic social science research eventually influences policymakers.

The authors try to answer a question that many think tanks have asked themselves (but rarely actually ask policymakers):

What, precisely, do the most senior national security policymakers want from international relations scholars?

They arrive at a number of interesting findings. Some are well known by now but others, I think, are worth highlighting:

Theory is not a bad word: I remember, before “Theory of Change” became all the rage, being asked by DFID to remove the word “theory” (in relation to a brief mention of theories of political change) from a 15-page How to Note on how to plan and monitor policy influencing strategies (I was also asked to cut the note, as it was too long). DFID and other agencies have employed lots of ‘knowledge brokers’ to digest theory for policymakers and protect them from the hard world of academic research. The problem, however, isn’t the theories or the complex ideas that researchers often deal with. The problem is jargon!

While policymakers do use theory they are skeptical of much of academic social science which they see as jargon-ridden and overly focused on technique, at the expense of substantive findings.

Smart policymakers are not researchers’ best friends (at least not for those who like to claim impact): A few months ago, Philipe Martin wrote about an interesting situation among the think tanks of the TTI. Apparently, the policymakers who knew the think tanks best also distrusted them most. In other words, the more one knows about something, the more skeptical or critical one is of those who claim to know about it, too.

Not surprisingly, rank in government is often negatively associated with tolerance for sophisticated methods; more striking, in our view, is that level of education also has that same negative correlation, indicating that it is those most familiar with those theories and techniques who are most skeptical of them.

It is about people (and ideas, but mostly people): In the world of intelligence services, the most important contributions to policy decisions happen not via ‘briefs’ or ‘documents’ but via conversations. From that post:

At the National Intelligence Council, I came to think that, for all the technology, strategic analysis was best done in person. I came to think that our real products weren’t those papers, the NIEs (National Intelligence Estimates). Rather, they were the NIOs, the National Intelligence Officers – the experts, not the papers.

The paper on the White House agrees:

Finally, policymakers believe that the most important contributions scholars can make are not as direct policy participants or trainers of aspiring government employees, but rather as informal advisors or creators of new knowledge. However, severe time constraints limit their ability to use such scholarship in any but its very briefest presentation.

From Britain

The second paper comes from Britain and deals with the role that the media, especially printed newspapers, play in reaching and influencing policymakers. Earlier this year I organised an event in Lima in which we asked a journalist, a politician, and a researcher how they preferred to access information. The political panelist was very clear about the role that the media played in shaping his agenda, even his agenda for the day.

This paper by Colin Talbot and Carole Talbot at Manchester University, Sir Humphrey and the professors: What does Whitehall want from academics?, makes this very same point.

The findings include the following:

How do they access expertise?

[Figure: survey results on how Whitehall policymakers access expertise]

Most interestingly, though, policymakers were asked which disciplines they found most useful. Not surprisingly, the findings show that the more applied or applicable disciplines came out on top. Many think tanks tend to organise themselves along disciplines (economics, law, social sciences, etc.) and therefore fail to capture policymakers’ own preferences:

[Figure: survey results on which disciplines policymakers find most useful]

Finally (there are many more interesting findings), the survey asks whether specific research or practical expertise is more important to policymakers. The answer suggests that researchers need to get involved in the practice of their fields of study, even more than they worry about communicating their research findings:

[Figure: survey results on whether policymakers value specific research or practical expertise more]

From Australia

The third paper is one recently published via the LSE Impact of Social Sciences blog: Australian survey indicates policy-makers still have major reservations about assigning priority to academic research, by Michele Ferguson, Brian Head, Adrian Cherney and Paul Boreham. In fact, the Australian report is a whole project that includes several papers, which can be found here: Publications from: Are Australian policymakers interested in social science research?

Among their publications is a survey:

the responses from the survey indicated that academic research, while valued and considered relevant, is not being used by the majority of staff in policy decision-making. Policy staff at both federal and state levels indicated major reservations about assigning high priority to academic research in their policy development work and only 16 percent of respondents reported that university research results have regularly influenced changes in policies developed by their unit.

But most interesting is the difference in the perception of influence between policymakers and researchers. This table speaks for itself:

Academics certainly think quite highly of themselves. My own experience working with think tanks in developing countries would tend to confirm this. The thousands of case studies that research funders like to commission (and think tanks like to produce) overestimate the role that research plays in policymaking.

The Australian researchers provide an excellent list of key results that are worth reproducing here:

Policy-makers

94% – Indicated that the internet is an important means of obtaining research information

82% – Reported difficulties in accessing full-text versions of academic articles and reports

81% – Agreed that the policy-making process is driven by budgetary considerations

71% – Perceived that urgent day-to-day issues take precedence over “long-term” thinking

71% – When using academic research, consider it a high priority that the research findings are unbiased

61% – Reported that in the last 12 months they had written a policy-related document that drew on academic research

Academic researchers

84% – Agreed that academic reward systems do not adequately recognise dissemination of work to non-academic end-users

81% – Indicated that the amount of time needed to invest in coordinating the work between different partners in research collaborations is potentially problematic

80% – Find there are different research orientations between academics and external partners

77% – Agreed that research partnerships have provided them with opportunities for their research to have an impact on policy and practice

73% – Agreed that the academic requirement to publish in peer-reviewed journals inhibits a focus on policy and practitioner audiences

All the papers are worth reading in detail. There have been previous efforts among international development think tanks to undertake similar surveys. Unfortunately, they have never been large enough or sufficiently robust to yield much more than anecdotal evidence.

The Australian study in particular covers a number of angles and issues. It also provides excellent recommendations (published in the LSE blog).

It is quite telling that they are:

  1. Common sense things: 1) to be influential, academic research must be accessible, 2) all actors must take the time and effort to build and maintain relationships, 3) choose the best channels and tools and be consistent, 4) create opportunities to bring policymakers and academics together.
  2. They should be part of the day-to-day job of research centres by now. I am amazed that funders are still hiring consultants and ‘encouraging’ their grantees to make their research more accessible, or to build relationships with the media or with policymakers. These tasks should long since have been incorporated into the core mandates of think tanks and even of more academically leaning research centres.

The mechanisms and markers of research quality for Think Tanks

Originally published in Research to Action in February 2015.

Adriana Arellano is the Research Director at GrupoFARO, an Ecuadorian, independent, plural, non-partisan, secular think tank. This post has been adapted from an original contribution to the TTI Exchange e-forum.

Research quality is a key concern and indicator for think tanks, one we keep enriching and trying to measure. The concept of research quality is made up of a mix of characteristics and recognised elements, including:

  • Application of sound methods
  • A comprehensive review of relevant literature
  • Evidence-based conclusions
  • Consideration of limitations, making them explicit and avoiding bias
  • Use of high quality data sets
  • Relevance
  • Timeliness
  • Usefulness of the research products

Think tanks regularly measure or assess the quality of their research efforts through different mechanisms and quality markers:

1. Intra-organisation review processes and external peer review processes

2. Quality perception surveys

3. Number of indexed articles published

4. Research uptake by policy makers

  1. Review processes: The first two mechanisms serve not only to measure quality but also to improve it. Review processes can help a think tank gather feedback on flaws and observations on the quality of a publication.
  2. Quality perception surveys: These are useful mechanisms for assessing the quality of products generated by a think tank. They provide an opportunity to understand stakeholders’ perceptions of broader aspects of quality, including the relevance of research, its timeliness and its usefulness to the policy process.
  3. Number of indexed articles published: This quality marker is often used by donors and think tanks to measure research quality. However, it is problematic to concentrate on this indicator, as it is better suited to the work of universities, which have a stronger focus on reaching an academic audience, than to that of think tanks, which work to reach a variety of audiences beyond academic journal readers[*].
  4. Research uptake by policy makers: This indicator is difficult to assess because the policy-making process is complex. Policy development can take a lot of time and, depending on the context, in some cases think tanks will want to first raise awareness of an issue before building the grounds for a debate or the proposal of a specific policy. At other times, the process of research uptake can be indirect, through constituents or media representatives. One way to evaluate this marker more closely would be to measure the dissemination of research products to appropriate audiences and assess the influence these have on the relevant stage of the policy-making cycle, depending on the context.

When differentiating our organisations and our efforts from those of other research producers, like universities, the definition of research quality and its markers requires particular attention. The concepts of Modes of Knowledge Production (Gibbons 1998) may be useful for this purpose, with most universities falling under Mode 1 of knowledge production and most think tanks operating under Mode 2:

[Figure: Modes of Knowledge Production (Gibbons 1998), contrasting Mode 1 and Mode 2]
Many organisations may move between Mode 1 and Mode 2 depending on the type of work they are doing, their teams, the project they are working on and the issue they research. However, this framework can be used to better understand the realm of work of our organisations and help define more relevant quality markers and indicators for our research products.

[*] According to a recent study, half of academic papers are read only by their authors and journal editors: http://www.smithsonianmag.com/smart-news/half-academic-studies-are-never-read-more-three-people-180950222/?no-ist

Communications and Impact Metrics for Think Tanks

Published in CIGI online in July 2013

This blog post is based on a presentation made at the conference “Think Tanks – Facing the Changing World,” hosted by the Chinese Academy of Social Sciences in Beijing, June 17-18, 2013.

Today, many of the world’s 5,500 think tanks are seeking more effective ways to communicate, to increase their impact – and exploring better ways to measure that impact.

My views on these tasks are shaped by 35 years in communications, including in newspapers and news websites, as well as my work these past three years with an independent, non-partisan global think tank, The Centre for International Governance Innovation (CIGI).

For any organization, including think tanks, good communications begin with the creation of an overall strategic plan. This may seem obvious, but any enterprise is more likely to succeed with a clear mission and goals (many of us can identify cases where a muddy plan led to poor results). Mission is a definition of purpose. Goals define what success will look like: the desired impact. Tactics are the actions necessary to achieve those goals. It helps everyone in the organization if a strategy combining these elements in a logical fashion is written consultatively, then shared internally, so that each person can see how his or her work contributes to the overall plan.

A traditional view of think tanks is that their strategy requires them to conduct research and analysis to develop policy ideas, and then communicate their policy ideas both directly and indirectly. They can communicate directly, to policy makers who exercise power by making decisions. They can also communicate indirectly, to policy influencers, such as the media, scholars and citizens.

One challenge, however, is measuring the influence of think tanks, especially in the areas of policy impact, to assess whether the strategic plan was successful. The problem is one of attribution — who gets the credit for a policy that is implemented? Policy input comes from many places. Public or governmental policy development is a complex and iterative process in which policy ideas are researched, analyzed, discussed and refined — often through broad consultations with many stakeholders. When a policy is finally adopted, it may wear the fingerprints of many hands. For these reasons, a think tank cannot always claim success and say, “this policy was our idea.” In many cases, it would be highly unusual for a political leader to give credit to a particular think tank for a specific policy; such leaders must take ownership of their own policies, to be accountable for them.

In creating impact, a think tank can extend its role beyond that of conducting research, analyzing and identifying policy problems or sharing policy ideas. For example, think tanks also have the ability to convene meetings of different groups at conferences, seminars and workshops — to connect people and to facilitate dialogue. As conveners, think tanks have the ability to build bridges among diverse groups such as policy makers, non-governmental organizations, academics, business leaders and the media. In this way, think tanks can create a sort of “Track II” process — a catalytic role in which the think tank’s own influence is, once again, hard to measure. Think tanks may also have a role in education; through training programs, education and outreach, think tanks can help to develop the next generation of diplomats, bureaucrats and political leaders.

In communications, it is important for think tanks to reach the right people, with the right message, using the right method. Think tanks use a variety of communications channels, as different channels may be more effective with certain audiences. To reach top leaders, for example, an ineffective method would be to rely on academic-style research papers, because high-level leaders are busy and have little time to read. The best method of outreach to senior leaders might be small meetings to present research findings in person, but this depends on having access to leaders through a think tank’s network of people with excellent connections.

Meanwhile, middle-level officials can be reached through multiple channels, such as conferences, workshops, papers and policy briefs (research papers might be 5,000 to 10,000 words, or more; policy briefs are shorter documents of 1,000 to 1,500 words, which distill the research into a few concise findings or policy recommendations). Academics and scholars are more easily reached through well-written research papers and scholarly books. The wider public can best be reached through accessible websites and through the news media. For outreach through news media, think tanks must deploy skilled communications specialists who create and send news releases written in journalistic style, and who follow up personally with journalists with whom they have developed relationships through regular contact. Other channels of communications include social media, newsletters (including email newsletters) and annual reports, each suitable to a particular audience. Good communication plans use a combination of all of these channels to achieve the greatest impact.

The “Cycle of Impact” for a think tank has three phases. The first phase is to Plan. Researchers within think tanks consult with policy makers to better understand the challenges and issues those policy makers are facing; they design projects to address those topics, and the design includes an allocation of resources, budgets, staff and timelines. The second phase is to Engage. The think tank may engage in deep research and analysis of the topic, including the historical context and policy options; it may also convene conferences and public or private meetings as necessary; and it may communicate its findings through publications, websites and social media. The final phase is to Measure. The think tank may track the quantity of outputs in publications, media mentions, website traffic and social media hits; it may evaluate the quality of the outputs (even if this is a subjective judgment) and it may even try to assess the actual impact on public policies (although this raises the difficulty of attribution, as discussed earlier); and it may report on these measurements to stakeholders, such as funders of the think tank. The third phase is the easiest to overlook, but measuring outcomes can yield valuable lessons to help a think tank improve its work.

We can think of many things to measure at a think tank. What follows is a list of 15 possible metrics, as suggested by various experts on think tanks — and unfortunately, the more useful ones to consider may also be the hardest to measure in exact numbers. These metrics can be grouped, with the first five metrics being measures of Exposure, based on an assumption that more influential think tanks are more exposed to public view.

  1. Media mentions: These are citations of the think tank, by name, in media such as newspapers and news websites. Third-party services can be hired to measure citations, or think-tank staff can search the Web with Internet search engines. Online searches are imperfect, however; they may not capture references that appear in traditional print only, or on television or radio, and they may miss citations behind paywalls or other security measures. A minimal counting sketch follows this list.
  2. Number and type of publications. This is strictly a quantitative measure of the think tank’s publications; it does not evaluate whether their content is actually of high quality.
  3. Scholarly citations. These include citations of the think tank’s work in academic journals.
  4. Government citations. These include citations of the think tank’s work in government meetings or official party proceedings.
  5. Think tank ratings. How did the think tank fare in annual ratings, such as those produced by the University of Pennsylvania? Some critics see such rankings as mere popularity votes, based on perceptions only, with methodologies that do not take into account different structures, funders, missions or other characteristics of think tanks. Nevertheless, the ratings do garner considerable attention.

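As a rough illustration of what such an in-house count might involve, here is a minimal sketch in Python, assuming a set of already-collected article texts and an invented organisation name; it is not a real media-monitoring API, and a real workflow would still face the print, broadcast and paywall gaps noted above.

```python
# Minimal sketch: counting media mentions of a think tank by name in a
# collection of article texts. The organisation name and sample articles
# are hypothetical; real texts would come from a monitoring service.
import re

def count_mentions(articles, name):
    """Return how many articles mention the organisation at least once."""
    pattern = re.compile(re.escape(name), re.IGNORECASE)
    return sum(1 for text in articles if pattern.search(text))

articles = [
    "A new report by the Centre for Policy Ideas argues that ...",
    "Budget negotiations continued in parliament today ...",
    "Researchers at the centre for policy ideas told reporters ...",
]

print(count_mentions(articles, "Centre for Policy Ideas"))  # -> 2
```

Even a simple count like this only measures exposure, not whether the coverage was favourable or influential.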
The next group of metrics looks at Resources, based on the assumption that more resources allow a think tank to exercise more clout and, hence, achieve more influence.

  1. Quality, diversity and stability of funding. The source of its money may reflect on a think tank’s independence, support and connections.
  2. Number, experience, skills, reputation of experts, analysts and researchers. It’s easy to count heads, but reputation is a subjective quality and harder to measure.
  3. Quality and extent of networks and partnerships. Influence is not just a question of who you are, but who you know.

The next group of metrics concerns Demand — that is, does anyone actually want to see or hear from a particular think tank?

  1. Events. The number of conferences, lectures and workshops, and the number of attendees (both are simple quantifiable measures). Harder to measure is the quality of the attendees. Are we just filling the room, or are we attracting influential opinion leaders, powerful policy makers and top-level experts?
  2. Digital traffic and engagement. Number of website visitors, page views, time spent on pages, “likes” or followers.
  3. Official access. Number of consultations with officials, as requested by the officials themselves.
  4. Publications sold or downloaded from websites. This is not a measure of output, but rather of the external “pull” on the publications.

The final group of metrics considers Policy Impact and Quality of Work. These may be the most important things to measure, but also are among the most difficult to quantify.

  1. Policy recommendations considered or actually adopted. As discussed previously, this is a problem of attribution. A think tank may say it put forward an idea, but if others had the same idea, who gets the credit if a policy is implemented?
  2. Testimonials. Praise, criticism or other assessments of a think tank’s work can be collected through interviews with policy makers or recognized experts; this work can be done by external, independent evaluators, reporting to the think tank’s board or funders. As well, opinions about the think tank can be collected through formal surveys of the organization’s event attendees or subscribers to its newsletters and publications.
  3. Quality of the think tank’s work. This is the most subjective of all metrics, but criteria for quality can be developed, defined and placed on scales (such as from 1 to 10); a minimal scoring sketch follows this list. How good were the publications in terms of readability and insight? How relevant were the projects and outputs to real-world problems and issues? How effective is the think tank in communicating its messages? Again, external and independent evaluators can be hired to make these highly subjective judgments.
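To make the scale idea concrete, here is a minimal sketch, assuming hypothetical criteria and weights rather than any established evaluation methodology, of how an evaluator’s rubric scores could be combined into a single number.

```python
# Minimal sketch: combining 1-10 rubric scores into one weighted quality
# score for a single output. The criteria and weights are illustrative
# assumptions, not an established evaluation methodology.
CRITERIA_WEIGHTS = {"readability": 0.3, "insight": 0.4, "relevance": 0.3}

def quality_score(scores):
    """Weighted average of rubric scores; weights are assumed to sum to 1."""
    return sum(CRITERIA_WEIGHTS[criterion] * value
               for criterion, value in scores.items())

# One evaluator's scores for one publication:
print(round(quality_score({"readability": 7, "insight": 9, "relevance": 8}), 2))  # -> 8.1
```

Averaging such scores across several independent evaluators and over time would give a crude but trackable signal of quality.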

In summary, to achieve maximum impact, think tanks should develop an overall strategic plan for the organization, plan their research projects consultatively with policy makers, engage their audiences through channels that are carefully designed to reach the right people using the right method and, finally, measure the outcomes of their work to ensure the goals were met.

Prepared with the assistance of CIGI Public Affairs Coordinator Kelly Lorimer.