Chapter 10 – 10.28 – Metrics and Key Performance Indicators (KPIs)

10.28.1 Purpose

Metrics and key performance indicators measure the performance of solutions, solution components, and other matters of interest to stakeholders.

10.28.2 Description

A metric is a quantifiable level of an indicator that an organization uses to measure progress. An indicator identifies a specific numerical measurement that represents the degree of progress toward achieving a goal, objective, output, activity, or further input. A key performance indicator (KPI) is one that measures progress towards a strategic goal or objective. Reporting is the process of informing stakeholders of metrics or indicators in specified formats and at specified intervals.

Metrics and reporting are key components of monitoring and evaluation. Monitoring is a continuous process of data collection used to determine how well a solution has been implemented as compared to the expected results. Evaluation is the systematic and objective assessment of a solution both to determine its status and effectiveness in meeting objectives over time and to identify ways to improve the solution to better meet objectives. The top priorities of a monitoring and evaluation system are the intended goals and effects of a solution, as well as inputs, activities, and outputs.

10.28.3 Elements

.1 Indicators

An indicator displays, in table or graphical form, the result of analyzing one or more specific measures that address a concern about a need, value, output, activity, or input. Each concern requires at least one indicator to measure it properly, but some may require several.

A good indicator has six characteristics:

  • Clear: precise and unambiguous.
  • Relevant: appropriate to the concern.
  • Economical: available at reasonable cost.
  • Adequate: provides a sufficient basis on which to assess performance.
  • Quantifiable: can be independently validated.
  • Trustworthy and Credible: based on evidence and research.

Beyond these characteristics, stakeholder interests matter: certain indicators may help stakeholders perform or improve more than others.

Over time, weaknesses in some indicators can be identified and improved. Not all factors can be measured directly. Proxies can be used when data for direct indicators are not available or when it is not feasible to collect them at regular intervals.

For example, in the absence of a survey of client satisfaction, an organization might use the proportion of all contracts renewed as an indicator.

When establishing an indicator, business analysts will consider its source, method of collection, collector, and the cost, frequency, and difficulty of collection.

Secondary sources of data may be the most economical, but to meet the other characteristics of a good indicator, primary research such as surveys, interviews, or direct observations may be necessary. The method of data collection is the key driver of a monitoring, evaluation, and reporting system’s cost.
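As a rough illustration of these collection considerations, the sketch below records the attributes a business analyst might capture for each indicator. The class name, fields, and example values are assumptions made for illustration only; they are not prescribed by this technique.

    # Illustrative sketch: field names are assumptions, not terms defined by this technique.
    from dataclasses import dataclass

    @dataclass
    class IndicatorDefinition:
        """One indicator and the collection considerations discussed above."""
        name: str
        concern: str             # the need, value, output, activity, or input it addresses
        source: str              # e.g. "secondary: contract system" or "primary: client survey"
        collection_method: str   # survey, interview, direct observation, system extract
        collector: str           # role or team responsible for collection
        cost_per_cycle: float    # estimated cost of one collection cycle
        frequency: str           # e.g. "quarterly"
        is_proxy: bool = False   # True when a proxy stands in for a direct measure

    # Example: the contract-renewal proxy for client satisfaction mentioned above
    renewal_rate = IndicatorDefinition(
        name="contract_renewal_rate",
        concern="client satisfaction",
        source="secondary: contract management system",
        collection_method="system extract",
        collector="account operations",
        cost_per_cycle=500.0,
        frequency="quarterly",
        is_proxy=True,
    )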

.2 Metrics

Metrics are quantifiable levels of indicators that are measured at a specified point in time. A target metric is the objective to be reached within a specified period. In setting a metric for an indicator, it is important to have a clear understanding of the baseline starting point, resources that can be devoted to improving the factors covered by the indicator, and political concerns.

A metric can be a specific point, a threshold, or a range. A range can be useful if the indicator is new. Depending on the need, the scope of time to reach the target metric can be multi-year, annual, quarterly, or even more frequent.
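A point, threshold, or range target can be captured and checked in a small structure such as the sketch below. The names, the status labels, and the assumption that higher values are better are all illustrative choices, not part of this technique.

    # Minimal sketch, assuming a target may be a point, a threshold, or a range,
    # and assuming higher indicator values are better. Names are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MetricTarget:
        baseline: float                     # starting point measured before the period
        point: Optional[float] = None       # exact value to reach
        threshold: Optional[float] = None   # minimum acceptable value
        range_low: Optional[float] = None   # acceptable band, useful for a new indicator
        range_high: Optional[float] = None

        def status(self, current: float) -> str:
            if self.point is not None:
                return "met" if current >= self.point else "not met"
            if self.threshold is not None:
                return "met" if current >= self.threshold else "not met"
            if self.range_low is not None and self.range_high is not None:
                return "met" if self.range_low <= current <= self.range_high else "not met"
            return "no target defined"

    # Example: a new indicator tracked against a range rather than a single point
    target = MetricTarget(baseline=0.62, range_low=0.70, range_high=0.80)
    print(target.status(0.74))  # prints "met"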

.3 Structure

Establishing a monitoring and evaluation system requires a data collection procedure, a data analysis procedure, a reporting procedure, and the collection of baseline data.

  • Data collection procedure: covers the units of analysis, sampling procedures, data collection instruments to use, collection frequency, and responsibility for collection.
  • Data analysis procedure: specifies both the procedures for conducting the analysis and the data consumer, who may have strong interests in how the analysis is conducted.
  • Reporting procedure: covers the report templates, recipients, frequency, and means of communication.
  • Baseline data: the data collected immediately before or at the beginning of the period being measured. It is used both to learn about recent performance and to measure progress from that point forward, and it needs to be collected, analyzed, and reported for each indicator.
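As a rough sketch of how such a plan might be written down, the configuration below groups the four components listed above. All keys and example values are assumptions made for illustration; they are not a prescribed structure.

    # Illustrative monitoring and evaluation plan; keys and values are assumptions.
    monitoring_and_evaluation_plan = {
        "data_collection": {
            "unit_of_analysis": "individual client contract",
            "sampling": "all active contracts",
            "instrument": "contract management system extract",
            "frequency": "quarterly",
            "responsible": "account operations",
        },
        "data_analysis": {
            "procedure": "trend analysis against baseline",
            "consumer": "client services director",
        },
        "reporting": {
            "template": "quarterly KPI dashboard",
            "recipients": ["client services director", "sponsor"],
            "frequency": "quarterly",
            "channel": "email",
        },
        "baseline": {
            "collected": "start of fiscal year",
            "values_per_indicator": {"contract_renewal_rate": 0.62},
        },
    }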

There are three key factors in assessing the quality of indicators and their metrics: reliability, validity, and timeliness. Reliability is the extent to which the data collection approach is stable and consistent across time and space. Validity is the extent to which the data clearly and directly measure the performance the organization intends to measure. Timeliness is how well the frequency and latency of the data fit management's needs.

.4 Reporting

Typically, reports compare the baseline, current metrics, and target metrics with calculations of the differences presented in both absolute and relative terms. In most situations, trends are more credible and important than absolute metrics.
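The comparison described above can be expressed as a simple calculation of absolute and relative differences, as in the sketch below. The function name and output fields are illustrative assumptions, and the calculation assumes nonzero baseline and target values.

    # Sketch of a single report row: change from baseline and gap to target,
    # in absolute and relative terms. Assumes baseline and target are nonzero.
    def report_row(baseline: float, current: float, target: float) -> dict:
        return {
            "baseline": baseline,
            "current": current,
            "target": target,
            "change_abs": current - baseline,               # absolute change from baseline
            "change_rel": (current - baseline) / baseline,  # relative change from baseline
            "gap_to_target_abs": target - current,
            "gap_to_target_rel": (target - current) / target,
        }

    # Example: an indicator that moved from 0.62 to 0.74 against a 0.80 target
    print(report_row(baseline=0.62, current=0.74, target=0.80))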

Visual presentations tend to be more effective than tables, particularly when using qualitative text to explain the data.

10.28.4 Usage Considerations

.1 Strengths

  • Establishing a monitoring and evaluation system allows stakeholders to understand the extent to which a solution meets an objective, as well as how effective the inputs and activities used to develop the solution (its outputs) were.
  • Indicators, metrics, and reporting also facilitate organizational alignment, linking goals to objectives, supporting solutions, underlying tasks, and resources.

.2 Limitations

  • Gathering excessive amounts of data beyond what is needed results in unnecessary expense in collecting, analyzing, and reporting it, and distracts project members from other responsibilities. This is particularly relevant on agile projects.
  • A metrics program becomes bureaucratic when it collects too much data and fails to generate reports that allow timely action. Those charged with collecting metric data must be given feedback so they understand how their actions affect the quality of the project results.
  • When metrics are used to assess performance, the individuals being measured are likely to act to increase their performance on those metrics, even if this causes sub-optimal performance on other activities.
