Last week I mentioned that standard scores have several advantages for the decisionmaker. The first is that they literally allow you to compare apples with oranges.
For example, let's suppose that you are evaluating employees with three measures: sales, customer approval ratings, and supervisors' ratings. Sales are calculated in dollars, customer approval ratings have a maximum of 10, and supervisors' ratings have a maximum of 40. If you don't convert these ratings to standard scores, you are pretty well restricted to combining these measures by comparing each employee's rank on them. That is too crude a comparison, because ranks vary in significance according to how close an employee's performance is to the average.
In a normal distribution (which is the type of distribution you're usually dealing with) differences in ranks near the average represent less of a difference in ability than differences at the extremes of the distribution. In a group of 100 people, for example, the difference between the first and the fifth-ranked people is usually greater than the difference between the fifty-first and fifty-fifth ranked.
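To see this numerically, here is a short Python sketch (mine, not the article's) that converts ranks in a group of 100 to approximate z-scores under a normal distribution. The rank-to-percentile midpoint conversion is a common textbook approximation, assumed here for illustration:

```python
from statistics import NormalDist

def rank_to_z(rank, n):
    """Approximate z-score for a given rank in a group of n people,
    using the midpoint of the rank's percentile band (an assumed convention)."""
    percentile = (n - rank + 0.5) / n
    return NormalDist().inv_cdf(percentile)

n = 100
gap_top    = rank_to_z(1, n) - rank_to_z(5, n)    # 1st vs 5th place
gap_middle = rank_to_z(51, n) - rank_to_z(55, n)  # 51st vs 55th place

print(f"gap between 1st and 5th:   {gap_top:.2f} standard deviations")
print(f"gap between 51st and 55th: {gap_middle:.2f} standard deviations")
```

Running this shows the four-rank gap at the top of the distribution spans far more standard deviations than the same four-rank gap in the middle, which is the point the paragraph above is making.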
The fairest way to assess these measures involves converting them to standard scores before combining them. Let's suppose the mean score on the customer rating was 6, and that the standard deviation was 1.5 (you can calculate these figures easily with spreadsheet or database software; statistical software will calculate the standard scores themselves). To convert a customer rating to a standard score, you first subtract the mean score from it. So if an employee's customer rating is 9, you subtract 6 from it to get a difference of 3. You then divide this difference by the standard deviation, which is 1.5. Three divided by 1.5 is 2, which is the standard score, also known as the z-score. A z-score of 2 simply means that the employee finished two standard deviations above the mean. A z-score of –2.0 means that he or she finished 2.0 standard deviations below the mean, and so on.
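The arithmetic above can be sketched in a few lines of Python; the `z_score` function name is mine, and the numbers are the customer-rating example from the paragraph:

```python
# Mean and standard deviation from the customer-rating example
mean, sd = 6, 1.5

def z_score(raw, mean, sd):
    """Standard score: how many standard deviations a raw score
    lies above (positive) or below (negative) the mean."""
    return (raw - mean) / sd

print(z_score(9, mean, sd))  # (9 - 6) / 1.5 = 2.0, two standard deviations above
print(z_score(3, mean, sd))  # (3 - 6) / 1.5 = -2.0, two standard deviations below
```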
After converting these measures to z-scores you can then determine which measure, if any, each employee scored best or worst on. However, do not add them up without advice from a statistically trained person. The reason is that they may not all measure the same thing. The correlations between the variables have to be inspected, and with a larger set of ratings you would need to perform principal components or factor analysis. And do not arbitrarily weight the variables. You might think that sales were three times more important than the ratings, and so multiply the sales z-score by three. All this will do, though, is make the other two ratings irrelevant. Non-arbitrary weights can be derived from further statistical analysis, but you need to consult someone who knows how to do it. In general, if the three measures are not correlated with each other, you don't want to add them together. If, on the other hand, they are all correlated with each other, then they are all measuring the same thing and weighting is unnecessary.
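As a rough illustration of inspecting correlations before combining measures, here is a Python sketch. The five employees' scores are hypothetical, and the hand-rolled Pearson formula (the average product of paired z-scores) is a standard statistical identity, not a procedure from the article:

```python
import statistics

def standardize(values):
    """Convert raw scores to z-scores: (x - mean) / standard deviation."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return [(x - m) / s for x in values]

def pearson(xs, ys):
    """Pearson correlation of two equal-length lists,
    computed as the average product of paired z-scores."""
    zx, zy = standardize(xs), standardize(ys)
    return sum(a * b for a, b in zip(zx, zy)) / (len(xs) - 1)

# Hypothetical scores for five employees on the three measures
sales      = [52000, 61000, 47000, 75000, 58000]
customer   = [7.5, 8.0, 6.0, 9.0, 6.5]
supervisor = [30, 34, 26, 38, 29]

for label, (a, b) in [("sales vs customer", (sales, customer)),
                      ("sales vs supervisor", (sales, supervisor)),
                      ("customer vs supervisor", (customer, supervisor))]:
    print(f"{label}: r = {pearson(a, b):.2f}")
```

Inspecting the pairwise correlations this way tells you whether the measures move together at all; as the paragraph notes, deciding what to do with that information is a job for someone with statistical training.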
For an example of how transformation to standard scores can make data comparable, click here.
How to Compare Apples and Oranges © 1999, John FitzGerald