Making Business Sense of CMMI Level 4

Summary:
For some reason, the mystique of CMMI Level 4 seems to be wrapped around control charts—one of the methods used for statistical analysis of data. While use of control charts is almost always present in statistical analysis of software processes, Ed Weller all too often sees the reason for using statistical methods—and the reasoning behind the superficial analysis—lost in the concern for "building control charts to show that Company X is CMMI Level 4." Ed Weller offers valuable insight on CMMI Level 4 and what it really signifies.

At a conference discussion on high-maturity processes that I attended, the most frequently asked question was "How many control charts do I need to be Level 4?" Perhaps a better question would have been "How can we use control charts to improve our business performance?" An even better question is, "How can Level 4 improve our business?"

Readers unfamiliar with the CMMI® should scan other articles on StickyMinds.com or reference material from the Software Engineering Institute. I will not cover the mechanics of the various statistical techniques; readers who want an in-depth explanation should look to books or articles that treat these topics in detail. The focus of this article is to understand why statistical methods are useful, how to use them, and some of the common pitfalls I have seen in organizations that overlook the key question "How does this benefit our business?"

The phrase "business objectives" is used thirteen times in the CMMI Level 4 process area descriptions, providing clear direction for implementing Level 4. In a nutshell, Level 4 is about developing performance baselines; setting improvement objectives that are quantitative (numerical) improvements over those baseline measurements; applying analysis techniques that measure process performance (control); and predicting project results against estimates.

Quantitative Project Management is one of the process areas at Level 4. At its most basic, it means that you can estimate accurately using past performance baselines; make commitments for cost, schedule, and quality with narrow variation from the desired values; and manage using techniques that provide early warning of deviation, with the opportunity to take preventive or proactive action to stay on track.

Many companies first run into trouble when attempting to implement Level 4 because they have not established baselines for performance. For instance, what is the estimating accuracy for effort or schedule? If you do not know the current values, how can you set an objective for a 20 or 50 percent improvement in estimating accuracy? If you do not understand the current process and results for estimation, how can you change and improve estimating?
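
As a concrete, though purely illustrative, sketch of what an estimating-accuracy baseline looks like, the following Python fragment computes the bias and spread of past estimates. The project figures and the 20 percent improvement target are hypothetical, not data from any organization discussed here.

    # Sketch: deriving an estimating-accuracy performance baseline from
    # past projects. All figures are hypothetical, for illustration only.
    from statistics import mean, stdev

    # (estimated effort, actual effort) in person-months
    projects = [(120, 150), (80, 76), (200, 260), (45, 52), (300, 285)]

    # Signed estimating error as a fraction of the estimate:
    # positive = underestimated, negative = overestimated.
    errors = [(actual - est) / est for est, actual in projects]

    baseline_bias = mean(errors)      # average bias of the estimates
    baseline_spread = stdev(errors)   # variation around that bias

    print(f"Mean estimating error:      {baseline_bias:+.1%}")
    print(f"Std. deviation of error:    {baseline_spread:.1%}")

    # An improvement objective can now be stated quantitatively against
    # the baseline, e.g., cut the spread of estimating error by 20 percent.
    target_spread = 0.8 * baseline_spread
    print(f"Example improvement target: spread <= {target_spread:.1%}")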

If you are working on projects with fixed-price contracts, the inability to estimate accurately will cause you to lose money (underestimating) or lose business (overestimating). If you develop software to sell or to support internal operations, poor estimating will lead to committing to the wrong projects. This is especially true when estimating accuracy varies widely from project to project: you may commit to a project with equal benefit but higher eventual cost, or fail to invest in the right project because its cost was overestimated. Reducing variation benefits the business, and industry reports show instances of cost/effort variation below 5 percent in high-maturity organizations (see Performance Results of CMMI-Based Process Improvement).

Another misapplication of Level 4 methods is building control charts that purport to show stable processes without understanding the real attributes of process performance. A control chart showing that you have a stable process is meaningless unless it reflects a useful and valid relationship to the work being performed. Two fundamental characteristics of the inspection process that can be evaluated with control charts are the preparation rate (the number of pages per person per hour of effort spent in individual review) and defect density (the number of major defects per unit of size inspected). Inspection results usually are better when the preparation rate is under control, and defect density is one measure of the value of inspections.
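
To make those two measures concrete, here is a minimal sketch of how preparation rate and defect density might be computed from raw inspection records. The record layout and the numbers are hypothetical; they are not the data from the organization described below.

    # Sketch: computing preparation rate and defect density per inspection.
    # Record fields and values are hypothetical, for illustration only.
    inspections = [
        {"pages": 24, "prep_hours_per_reviewer": 2.0, "major_defects": 9},
        {"pages": 10, "prep_hours_per_reviewer": 1.5, "major_defects": 2},
        {"pages": 40, "prep_hours_per_reviewer": 2.5, "major_defects": 22},
    ]

    for i, rec in enumerate(inspections, start=1):
        # Preparation rate: pages covered per person per hour of preparation.
        prep_rate = rec["pages"] / rec["prep_hours_per_reviewer"]
        # Defect density: major defects found per page inspected.
        defect_density = rec["major_defects"] / rec["pages"]
        print(f"Inspection {i}: {prep_rate:.1f} pages/person-hour, "
              f"{defect_density:.2f} majors/page")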

I recently saw an analysis of inspection data where an organization used control charts to show that the process was under control, but a fundamental oversight compromised the analysis. Both preparation rate and defect density require a size measure. In the case of functional specifications, a measure often used is pages. In this case, the product was a legacy development where the new functions were incorporated into the existing functional specification and noted by change bars.

Size can be counted in two ways: by the number of pages containing change bars (easy to do) or by a more exact count of just the new and changed lines (potentially inaccurate and somewhat time-consuming, since the count had to be made by hand; no counting tool existed in this organization's environment). The company counted size both ways, and both methods produced preparation rate control charts showing a "controlled" process. Defect density behaved differently: counting full pages showed a controlled process, but counting changed lines indicated many out-of-control data points.

The analysis of process stability depended on the true relationship between size, effort, and defects. Understanding the actual stability mattered because the inspection results showed a large variation in defect density when measured against the new and changed functional specification material. Were the low defect densities the result of poor process execution or of good functional specifications? Were the high defect densities the result of good process execution and poor-quality specifications? The organization could not make a decision because the data as gathered was so convoluted.
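
The difference the counting method makes is easy to see in a small example. The following sketch uses made-up numbers and an assumed lines-per-page conversion; it is not the organization's actual data.

    # Sketch: the same inspection measured against two size bases.
    # All values, including the lines-per-page conversion, are hypothetical.
    LINES_PER_PAGE = 40

    # One inspection: 12 pages carry change bars, but only 55 lines on
    # those pages are actually new or changed; 6 major defects were found.
    pages_with_change_bars = 12
    new_or_changed_lines = 55
    major_defects = 6

    density_by_pages = major_defects / pages_with_change_bars
    density_by_lines = major_defects / (new_or_changed_lines / LINES_PER_PAGE)

    print(f"Density vs. pages with change bars: {density_by_pages:.2f} majors/page")
    print(f"Density vs. new/changed text only:  {density_by_lines:.2f} majors/page-equivalent")
    # The second figure is several times the first; the two series plot very
    # differently on a control chart even though the work is the same.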

These questions can be evaluated by looking at the process execution. Going back to the preparation rate control charts, it was clear that they were significantly different depending on which counting algorithm was used. When full pages were used as the size measure, the control limits were at least three times the mean value, an indication that the process was not very capable and that the apparent stability was a byproduct of the data rather than a true evaluation of the process.
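
For readers who want to see the mechanics behind that observation, here is a minimal sketch of an individuals (XmR) chart calculation on a series of preparation rates. The data points are invented, and the limit-width check at the end is simply a rough indicator of how much, or how little, the chart can tell you.

    # Sketch: individuals (XmR) control limits for a preparation-rate series,
    # with a rough check of limit width relative to the mean.
    # The data points are hypothetical, for illustration only.
    from statistics import mean

    rates = [11.0, 9.5, 14.0, 8.0, 10.5, 12.0, 7.5, 13.0, 9.0, 28.0]

    center = mean(rates)
    # Average moving range between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(rates, rates[1:])]
    mr_bar = mean(moving_ranges)

    # Standard individuals-chart limits: center +/- 2.66 * average moving range.
    ucl = center + 2.66 * mr_bar
    lcl = max(0.0, center - 2.66 * mr_bar)  # a negative rate is meaningless

    out_of_control = [x for x in rates if x > ucl or x < lcl]
    limit_width_vs_mean = (ucl - lcl) / center

    print(f"Center {center:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")
    print(f"Points outside the limits: {out_of_control}")
    print(f"Limit width is {limit_width_vs_mean:.1f} times the mean")

Limits that span several times the mean, as in the situation this organization faced, say more about noisy data and a mismatched size measure than about a capable process.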

What was overlooked in the initial definition, which was chosen to "make it easier to count," was the fundamental relationship the calculation expresses: pages per unit of effort. The counting method equated the effort used to make a one-line change with the effort to review a full page of new material. While the control chart showed no obvious out-of-control indications, the relationship of effort to size was not valid. Note that in these situations some amount of "surround" material typically is reviewed to ensure the change is correct in context. The amount of surround material is hard to specify by any formula. Approximations will work reasonably well, but counting surround material to get a more accurate relationship between effort and size is one of the more difficult issues to address in the inspection process.

The real problem in the example was that using full pages as the size measure produced a defect density control chart that appeared to be in control. When size was measured in a way more consistent with the effort actually spent, we saw a defect density control chart in which nearly half the data points were outside the control limits. Understanding why the two results differed and, in particular, identifying the cause of the defect density variation were important from both a quality and a productivity perspective.

Were we not finding defects, or were we wasting time inspecting functional specifications that had no defects? If the latter, what was the cause? Was the product or function less complex, was the result unique to certain people, or was it driven by a different upstream process?

These are not easy questions to resolve, and corrective action is obviously different for each cause. In this case it would be necessary to look at more than just the data. Understanding the culture, the people performing the process, the end result of products delivered to test and eventually the customer, and the earlier process steps all proved important to the final outcome.

Some key points to remember:

    1. Use statistical and quantitative analysis to understand and improve your business
    2. Do not let "making good control charts" be the goal
    3. Ensure relationships shown in the control charts actually reflect the work being done and the relationship of the data in the real world
    4. Understand that some of the process data you evaluate is based on product characteristics (defect density) that may have causes outside or unrelated to the process being analyzed
    5. Look outside the process and product to the people executing the process for help
    6. If CMMI Level 4 doesn't help your business, why are you doing it? (See No. 1.)

® CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University

