file under: Program Management

A Strategic Approach to Program Metrics

Continuing the Webinar Conversation

CDM Smith program management leaders Tom Lutzenberger, Gerry Benson and Brian McCarthy held a webinar, “Complexity Under Control: Defining and Achieving Program Expectations,” which focused on the key components and benefits of using a program management plan to efficiently manage major capital projects. Time ran out before they could answer the last question, which concerned the key metrics that should be integrated into a program plan.

Here’s their response:

“When talking about key metrics, what strategies are typically used for monitoring purposes?”

The purpose of establishing metrics is to create a series of indicators that can be used to quickly assess program health by focusing attention on the programmatic activities and tasks of greatest interest or concern to owners.

With most programs comprising numerous projects and tasks, project-related metrics are also gathered for individual design and construction contracts. Where the progress of an individual project or task impacts the overall program or suggests a trend that needs to be monitored for all projects and/or tasks, the metric becomes programmatic.

A strategic approach to identifying these programmatic metrics involves the following steps:

1. Start by detailing the metrics for activities on the critical path. These should include the individual project and task metrics that best measure critical elements of performance and consistent progress against program expectations. For example, procurement, permitting and right-of-way activities, or even owner-managed activities such as O&M and warranty services, are all critical elements that should be checked continually. Metrics for these phases often address process duration compliance and conflicts with competing phase work.

2. Identify a consistent way to present and prioritize critical metrics. With so many activities happening simultaneously, all stakeholders should be able to read each metric in the same way and use it to assess the health of a task or project. Using a ‘stop-and-go’ concept, the health of a task or project is displayed as green (good), yellow (changing) or red (poor). This clear display of performance helps stakeholders make more cohesive decisions because thresholds are set based on a shared understanding of the metric.

3. Differentiate which thresholds deserve the most attention. Measurements (numbers, percentages and durations) are not by themselves indicators of issues worth monitoring. Thresholds should be identified that indicate when monitoring should continue and when action should be initiated. The reasoning behind these thresholds should be documented and used in staff training for any procedure that relies on the metric. Thresholds should also be prioritized across the program so that the most staff time is assigned to tracking the most critical, metric-defined situations.

4. Define which approach will be used to obtain metric information. Based on the priority established, monitoring can be accomplished in three ways: 100 Percent Surveillance is the most appropriate method for infrequent tasks or tasks with stringent performance requirements; Random Sampling is used mostly for recurring tasks; and Periodic Inspection works best for tasks that occur infrequently.

5. Figure out what to look for regarding variances. Investigating the causes of variations from baseline expectations can become complicated and should be managed closely. Keep an eye out for three factors. First, fluctuations from baseline program expectations for schedule, cost, scope and quality activities. Second, program trends, whether upward or downward, which add time as a variable in analyzing a change’s effect on the program’s progress. Third, the usefulness and accuracy of results, to avoid acting on negative variances or trends that aren’t grounded in a cost-benefit calculation. Certain variances can be considered acceptable without mitigation as long as the backup justification supports this option in a manner acceptable to the client.

6. Use performance reporting to further goals for alignment, streamlining and reinvention. Lessons learned will help improve future program performance and monitoring. As metrics are monitored and analyzed during program implementation, the needs of all involved stakeholders become better defined and more manageable. It’s also important to document in detail each action taken with each change to the original plan. Reporting these actions publicizes good management performance as well as accurate progress.
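The stop-and-go concept in steps 2, 3 and 5 can be sketched in a few lines of code. This is a minimal, illustrative example only; the function name, the use of fractional variance from baseline, and the threshold values are all hypothetical, not a method described by the speakers.

```python
# Illustrative sketch of the 'stop-and-go' metric concept: classify a
# metric's variance from its baseline against agreed thresholds.
# All names and threshold values here are hypothetical examples.

def metric_status(value, baseline, yellow_threshold, red_threshold):
    """Return green/yellow/red for a metric's deviation from baseline.

    Thresholds are fractional variances from baseline,
    e.g. 0.05 means a 5 percent deviation.
    """
    variance = abs(value - baseline) / baseline
    if variance >= red_threshold:
        return "red"      # poor: action should be initiated
    if variance >= yellow_threshold:
        return "yellow"   # changing: continue monitoring
    return "green"        # good: performing to expectations

# Example: a permitting phase baselined at 90 days, now forecast at 104.
print(metric_status(104, 90, yellow_threshold=0.05, red_threshold=0.15))
```

Because every stakeholder applies the same thresholds, a red status means the same thing on every project in the program, which is the shared-understanding benefit described in step 2.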

Missed the webinar? A full recap of the main points made by the speakers will be coming out soon.