Measuring the outputs of a process or service may seem like quite an easy thing to do. Unfortunately, many organisations get this wrong. What is the impact of getting it wrong? Read on…
Recently I undertook some consultancy work for a Local Government department. For many years they had struggled to deliver a certain service within the regulatory timeframes. They were required to report the performance of this service both to the community, through their annual report, and to the State Government. The poor results had been under high scrutiny for many years, and many internal reviews and improvement projects had been undertaken. While there had been significant improvements, the outputs remained consistently below the targets (which were quite low).
When I was asked to review the process, the first action was to define the problem. This seemed quite easy: together with the Manager and Team Leader, we put together a problem definition statement along the lines of meeting the KPI targets and quality outputs. On the first day of actually reviewing and challenging what I was seeing, it became quite apparent that the data being used to populate the reports was questionable at best. They had a problem with data integrity!
I decided to collect some evidence of what I suspected. Over the next two weeks I gathered my own data to compare with the official figures. Not to my surprise, there was a significant difference. It turned out that on many occasions the process had in fact met or exceeded the targets. One of the major problems was with the data collection and its integrity, rather than with the process itself. There were concerns over the process; however, without first having a reliable data collection and reporting method, it was impossible to measure the impact of any improvements.
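The comparison above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the period labels, percentages, and target are all invented, not the department's actual figures. The idea is simply to line up officially reported results against independently collected ones and see where the two disagree about whether the target was met.

```python
# Hypothetical figures only: reported vs independently measured
# on-time completion rates for the same periods.
official = {"Week 1": 0.58, "Week 2": 0.62}   # % on time, as reported
collected = {"Week 1": 0.77, "Week 2": 0.81}  # % on time, independently measured
target = 0.70                                 # assumed regulatory target

for period in official:
    reported = official[period]
    measured = collected[period]
    gap = measured - reported
    status = "met" if measured >= target else "missed"
    print(f"{period}: reported {reported:.0%}, measured {measured:.0%}, "
          f"gap {gap:+.0%}, target {status}")
```

With numbers like these, the reported figures show the target being missed in every period, while the independent measurements show it being met; the discrepancy points at the data collection, not the process.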
There are two morals to this story.
1. Measure the right things in the right way
2. The problem definition should be more than a perceived problem. Challenge your problem definition statement (PDS) through root cause analysis