Logic model
A logic model (also known as a logical framework, theory of change, or program matrix) is a tool used by funders, managers, and evaluators of programs to evaluate a program's effectiveness. It can also be used during planning and implementation.[1] Logic models are usually a graphical depiction of the logical relationships between the resources, activities, outputs and outcomes of a program.[2] While there are many ways in which logic models can be presented, the underlying purpose of constructing one is to assess the "if-then" (causal) relationships between the elements of the program.
Versions
In its simplest form, a logic model has four components:[3]
| Inputs | Activities | Outputs | Outcomes/impacts |
|---|---|---|---|
| what resources go into a program | what activities the program undertakes | what is produced through those activities | the changes or benefits that result from the program |
| e.g. money, staff, equipment | e.g. development of materials, training programs | e.g. number of booklets produced, workshops held, people trained | e.g. increased skills, knowledge or confidence, leading in the longer term to promotion, a new job, etc. |
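For readers who prefer code to diagrams, the four components can be captured as a simple data structure. The sketch below merely restates the table above; the class and field names are illustrative assumptions, not part of any logic-model standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Minimal four-component logic model (illustrative sketch only)."""
    inputs: List[str] = field(default_factory=list)      # resources that go into the program
    activities: List[str] = field(default_factory=list)  # what the program undertakes
    outputs: List[str] = field(default_factory=list)     # what those activities produce
    outcomes: List[str] = field(default_factory=list)    # changes or benefits that result

# Populated with the example entries from the table above.
training_program = LogicModel(
    inputs=["money", "staff", "equipment"],
    activities=["development of materials", "training programs"],
    outputs=["booklets produced", "workshops held", "people trained"],
    outcomes=["increased skills", "increased knowledge", "increased confidence"],
)
```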
Following the early development of the logic model in the 1970s by Carol Weiss, Joseph Wholey and others, many refinements and variations have been added to the basic concept. Many versions of logic models set out a series of outcomes/impacts, explaining in more detail the logic of how an intervention contributes to intended or observed results.[4] This will often include distinguishing between short-term, medium-term and long-term results, and between direct and indirect results.
Some logic models also include assumptions (the beliefs that prospective grantees hold about the program, the people involved, the context, and the way they think the program will work) and external factors (the environment in which the program exists, including outside influences that interact with and affect the program's action).
University Cooperative Extension Programs in the US have developed a more elaborate logic model, called the Program Action Logic Model, which includes six steps:
- Inputs (what we invest)
- Outputs:
  - Activities (the actual tasks we do)
  - Participation (who we serve; customers and stakeholders)
  - Engagement (how those we serve engage with the activities)
- Outcomes/Impacts:
  - Short Term (learning: awareness, knowledge, skills, motivations)
  - Medium Term (action: behavior, practice, decisions, policies)
  - Long Term (consequences: social, economic, environmental, etc.)
In front of the Inputs there is a description of the Situation and Priorities: the considerations that determine which Inputs will be needed.
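A hedged sketch of the same six-step structure follows; the class names and nesting are hypothetical conveniences for illustration, not notation taken from the Extension materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Outputs:
    activities: List[str] = field(default_factory=list)     # the actual tasks we do
    participation: List[str] = field(default_factory=list)  # who we serve
    engagement: List[str] = field(default_factory=list)     # how those we serve engage

@dataclass
class Outcomes:
    short_term: List[str] = field(default_factory=list)   # learning
    medium_term: List[str] = field(default_factory=list)  # action
    long_term: List[str] = field(default_factory=list)    # consequences

@dataclass
class ProgramActionLogicModel:
    situation_and_priorities: str = ""                    # context that determines which inputs are needed
    inputs: List[str] = field(default_factory=list)       # what we invest
    outputs: Outputs = field(default_factory=Outputs)
    outcomes: Outcomes = field(default_factory=Outcomes)
```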
The University of Wisconsin Extension offers a series of guidance documents[5] on the use of logic models. There is also an extensive bibliography[6] of work on this program logic model.
Advantages
By describing work in this way, managers gain a clearer way to define the work and to measure it. Performance measures can be drawn from any of the steps. One of the key insights of the logic model is the importance of measuring final outcomes or results, because it is quite possible to waste time and money (inputs), "spin the wheels" on work activities, or produce outputs without achieving the desired outcomes. These outcomes (impacts, long-term results) are the only justification for doing the work in the first place. For commercial organizations, outcomes relate to profit. For not-for-profit or governmental organizations, outcomes relate to successful achievement of mission or program goals.
Uses of the logic model
Program planning
One of the most important uses of the logic model is for program planning. Here it helps managers to 'plan with the end in mind' (Stephen Covey), rather than to consider only the inputs (e.g. budgets, employees) or only the tasks that must be done. In the past, program logic was justified by explaining the process from the perspective of an insider. McCawley (2001) outlines how this process was approached:
- We invest this time/money so that we can generate this activity/product.
- The activity/product is needed so people will learn how to do this.
- People need to learn that so they can apply their knowledge to this practice.
- When that practice is applied, the effect will be to change this condition.
- When that condition changes, we will no longer be in this situation.
While logic models have been used successfully in this way, Millar et al. (2001) suggest that following the above sequence, from inputs through to outcomes, can limit one's thinking to existing activities, programs and research questions. Instead, by using the logic model to focus on the intended outcomes of a particular program, the question changes from 'what is being done?' to 'what needs to be done?'. McCawley (2001) suggests that, using this new reasoning, a logic model for a program can be built by asking the following questions in sequence (a sketch of the backward sequence follows the list):
- What is the current situation that we intend to impact?
- What will it look like when we achieve the desired situation or outcome?
- What behaviors need to change for that outcome to be achieved?
- What knowledge or skills do people need before the behavior will change?
- What activities need to be performed to cause the necessary learning?
- What resources will be required to achieve the desired outcome?
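As a rough illustration of this backward reasoning, the sketch below pairs each planning stage with the corresponding question from the list above. The stage labels, function name and example answer are hypothetical; only the question wording is taken from the list.

```python
# Planning stages and questions, ordered from the desired end state back toward resources.
BACKWARD_PLANNING_QUESTIONS = [
    ("situation", "What is the current situation that we intend to impact?"),
    ("desired outcome", "What will it look like when we achieve the desired situation or outcome?"),
    ("behaviors", "What behaviors need to change for that outcome to be achieved?"),
    ("learning", "What knowledge or skills do people need before the behavior will change?"),
    ("activities", "What activities need to be performed to cause the necessary learning?"),
    ("resources", "What resources will be required to achieve the desired outcome?"),
]

def plan_backwards(answers: dict) -> None:
    """Print each planning question alongside any answer supplied so far."""
    for stage, question in BACKWARD_PLANNING_QUESTIONS:
        print(f"{question}\n  -> {answers.get(stage, '(to be decided)')}")

# Hypothetical usage: only the desired outcome has been articulated so far.
plan_backwards({"desired outcome": "participants gain and apply new skills"})
```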
By placing the focus on ultimate outcomes or results, planners can think backwards through the logic model to identify how best to achieve the desired results. Planners therefore need to understand the difference between the categories of the logic model.
Performance evaluation
The logic model is often used in government and not-for-profit organizations, where the mission and vision are not aimed at achieving a financial benefit. In such situations, where profit is not the intended result, it may be difficult to monitor progress toward outcomes; a program logic model supplies the needed indicators, in the form of output and outcome measures of performance. It is therefore important in these organizations to specify carefully the desired results and to consider how to monitor them over time. Often, as in education or social programs, the outcomes are long-term and mission success lies far in the future. In these cases, intermediate or shorter-term outcomes may be identified that provide an indication of progress toward the ultimate long-term outcome.
Traditionally, government programs were described only in terms of their budgets. It is easy to measure the amount of money spent on a program, but this is a poor indicator of mission success. Likewise it is relatively easy to measure the amount of work done (e.g. number of workers or number of years spent), but the workers may have just been 'spinning their wheels' without getting very far in terms of ultimate results or outcomes. The production of outputs is a better indicator that something was delivered to customers, but it is still possible that the output did not really meet the customer's needs, was not used, etc. Therefore, the focus on results or outcomes has become a mantra in government and not-for-profit programs.
The President's Management Agenda[7] is an example of the increasing emphasis on results in government management. It states:
"Government likes to begin things — to declare grand new programs and causes. But good beginnings are not the measure of success. What matters in the end is completion. Performance. Results."[8]
However, although outcomes are used as the primary indicators of program success or failure, they are still insufficient on their own. Outcomes may be achieved through processes independent of the program, in which case an evaluation of those outcomes would suggest program success when external factors were in fact responsible (Rossi, Lipsey and Freeman, 2004). In this respect, Rossi, Lipsey and Freeman (2004) suggest that a typical evaluation study should concern itself with measuring how the process indicators (inputs and outputs) have affected the outcome indicators. A program logic model needs to be assessed or designed before an evaluation of this kind is possible. The logic model can, and indeed should, be used in both formative evaluation (during implementation, offering the chance to improve the program) and summative evaluation (after the completion of the program).
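As a deliberately crude illustration (not a method prescribed by Rossi, Lipsey and Freeman), the sketch below correlates a hypothetical output indicator with a hypothetical outcome indicator across several program sites; the data and variable names are invented, and a real evaluation would rely on a far stronger design, such as a comparison group.

```python
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

# Hypothetical indicators for five program sites.
workshops_held = [4, 6, 2, 8, 5]               # output indicator
skill_gain_scores = [1.2, 1.9, 0.4, 2.6, 1.5]  # outcome indicator

# A strong association is at best weak evidence that outputs drove outcomes;
# external factors could explain both.
r = correlation(workshops_held, skill_gain_scores)
print(f"Pearson r between output and outcome indicators: {r:.2f}")
```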
The logic model and other management frameworks
Numerous other popular management frameworks have been developed in recent decades. This often causes confusion, because the various frameworks serve different functions, and it is important to select the right tool for the job. The following list of popular management tools indicates where each is most appropriate (the list is by no means complete).
Organizational assessment tools
Fact-gathering tools for a comprehensive view of the as-is situation in an organization, but without prescribing how to change it:
- Baldrige Criteria for Performance Excellence (United States)
- EFQM (Europe)
- SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats)
- Skills audits
- Customer surveys
Strategic planning tools
For identifying and prioritizing major long-term desired results in an organization, and strategies to achieve those results:
- Strategic Vision (Writing a clear "picture of the future" statement)
- Strategy maps
- Portfolio Management (Managing a portfolio of interdependent projects)
- Backcasting from sustainability principles (the ABCD process of The Natural Step: Awareness and vision of success in a sustainable society; Baseline analysis in relation to the vision; Creative actions and initiatives to close the gap; Decide on priorities, plan, implement and evaluate progress).
- Participatory Impact Pathways Analysis (An approach for project staff and stakeholders to jointly agree on a vision, develop a logic model and an evaluation plan)
- Weaver's Triangle[9] (simply asks organisations to identify inputs, outcomes and outputs).
Program planning and evaluation tools
For developing details of individual programs (what to do and what to measure) once overall strategies have been defined:
- Program logic model (this entry)
- Work Breakdown Structure
- Managing for Results model
- Earned Value Management
- PART - Program Assessment Rating Tool (US federal government)
Performance measurement tools
For measuring, monitoring and reporting the quality, efficiency, speed, cost and other aspects of projects, programs and/or processes:
Process improvement tools
For monitoring and improving the quality or efficiency of work processes:
- PDCA - Plan-do-check-act (Deming)
- TQM - Total Quality Management (Shewhart, Deming, Juran)
- Six Sigma
- BPR - Business Process Reengineering
- Organizational Design
Process standardization tools
For maintaining and documenting processes or resources to keep them repeatable and stable:
- ISO 9000
- CMMI - Capability Maturity Model Integration
- Business Process Management (BPM)
- Configuration management
- Enterprise Architecture
Notes
1. Innovation Network. "Logic model workbook" (PDF). Retrieved 28 August 2012.
2. McCawley, Paul. "The logic model for program planning and evaluation" (PDF). University of Idaho.
3. W. K. Kellogg Foundation (2001). W. K. Kellogg Foundation Logic Model Development Guide.
4. Weiss, C.H. (1972). Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
5. Guidance documents, University of Wisconsin Extension.
6. Bibliography, University of Wisconsin Extension.
7. President's Management Agenda (2002).
8. Results.gov: President's Management Agenda.
9. Weaver's Triangle: http://www.evaluationsupportscotland.org.uk/article.asp?id=9&node=gettingstarted
General References
- Millar, A., Simeone, R.S., & Carnevale, J.T. (2001). Logic models: a systems tool for performance management. Evaluation and Program Planning, 24, 73-81.
- Hernandez, M., & Hodges, S. (2003). Crafting Logic Models for Systems of Care: Ideas into Action.
- Rossi, P., Lipsey, M.W., & Freeman, H.E. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage.
- McCawley, P.F. (2001). The Logic Model for Program Planning and Evaluation. University of Idaho Extension. Retrieved from http://www.uiweb.uidaho.edu/extension/LogicModel.pdf