Safety professionals are showing growing interest in using leading measures of safety performance to replace traditional trailing measures such as recordable injury rates. Trailing measures are of course necessary; they are the equivalent of the "bottom line" in an accounting statement, and they reveal where injuries occur so action can be taken to prevent recurrence. However, to get the statistics, someone has to get hurt or sick. Wouldn't it be better to have predictive measures that alert managers and employees when safety systems are breaking down, so action can be taken to prevent the injury or illness in the first place?
Most safety professionals agree: we need leading indicators for safety. The challenge has been not only to find useful leading measures but also to convince line managers to use them. Fortunately, a critical mass is building that should help safety professionals meet both challenges. One important event is the upcoming report of the Organization Resources Counselors (ORC) Alternative Metrics Task Force. This ORC-led group of 55 Fortune 500 companies has worked for several years to develop a framework for building a set of leading, trailing, and financial measures of safety performance. Admirably, the group will share its results with the rest of the business community.
My goal in this paper is to add to this critical mass by examining the best practices of companies already using leading measures of safety performance. These best practices, along with examples to illustrate them, should provide readers with ideas they may adapt for their own organizations.
Most organizations use a trailing measure, recordable injury rate or lost-time accident rate, as their sole measure of safety performance. There are several reasons for this:
OSHA requires them.
Injury rates provide (in theory) a way to compare safety performance across an industry.
Safety professionals have traditionally used them.
A growing body of literature, however, shows that injury rate data are not as useful as one might think.
Some of the dissatisfaction concerns using injury rates as benchmarks; studies have shown that the data are not as consistent as one would expect. The ORC Alternative Metrics Task Force identified many factors that affect whether someone will report an injury at all (Newell, 2001), including potential disciplinary action, incentive plans, benefit plan coverage, and the relationship between worker and supervisor. It is also believed that the more pressure managers apply to get low injury numbers, the more likely it is that injuries will go unreported or be reported improperly. Individuals also differ in how consistently they record data on OSHA forms. As a result, what should be a valid trailing measure becomes suspect (Stricoff, 2000).
Another point of dissatisfaction is that injury rates are less meaningful in smaller work units, especially as safety performance improves. A team of ten people may work for years without an injury, so its rate sits at zero and says little about whether risk is actually under control; a single injury then sends the rate soaring, as the arithmetic below illustrates.
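To see why, consider the standard OSHA incidence-rate calculation, which normalizes injury counts to 200,000 hours (the hours worked in a year by 100 full-time employees at 2,000 hours each). The ten-person team below is a hypothetical illustration, not data from any study cited here:

    Rate = (N x 200,000) / H

where N is the number of recordable cases and H is the total hours worked. A ten-person team logs roughly H = 10 x 2,000 = 20,000 hours per year, so:

    Zero recordables: Rate = (0 x 200,000) / 20,000 = 0.0
    One recordable:   Rate = (1 x 200,000) / 20,000 = 10.0

One injury moves the team from a perfect record to a rate far above most published industry averages, while years of zeros reveal nothing about whether the underlying safety systems are actually working.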