Factor analysis and principal components analysis are statistical techniques used to simplify patterns of correlations. For example, if you gave three tests to a group of people, and each person got the same score on all three tests, you would assume the three tests were all measuring the same thing. Similarly, if you correlated the three tests and found that all the correlation coefficients were 1.00, denoting perfect correlation, you would conclude that the three measures all tapped a single factor or component. On the other hand, if the coefficients were all 0.00, denoting a complete lack of correlation, you would conclude that the three measures each tapped a different factor. Factor analysis and principal components analysis attempt to draw similar conclusions when the pattern of correlations is less obvious.
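The two extreme cases can be sketched numerically. In the sketch below (a minimal illustration, not part of the original discussion; the simulated data and the choice of numpy are my own assumptions), three test scores are generated either from one shared underlying factor or independently, and the eigenvalues of the correlation matrix show the difference: one large component versus three roughly equal ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of test-takers (arbitrary choice for the simulation)

# Case 1: three tests that all reflect one underlying factor,
# so their intercorrelations are close to 1.00
factor = rng.normal(size=n)
tests_one = np.column_stack(
    [factor + 0.1 * rng.normal(size=n) for _ in range(3)]
)

# Case 2: three unrelated tests, so intercorrelations are close to 0.00
tests_sep = rng.normal(size=(n, 3))

eigs = {}
for name, scores in [("one_factor", tests_one), ("independent", tests_sep)]:
    r = np.corrcoef(scores, rowvar=False)        # 3x3 correlation matrix
    eigs[name] = np.linalg.eigvalsh(r)[::-1]     # component variances, largest first
    print(name, np.round(eigs[name], 2))
```

With near-perfect correlations the first eigenvalue is close to 3 (one component carries nearly all the variance); with near-zero correlations all three eigenvalues hover around 1 (three separate components). Real data fall between these extremes, which is where the analysis becomes a matter of judgment.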
I say they attempt to do that because these techniques are exploratory. That is, there is no generally accepted significance test that will help you decide whether a factor structure underlies a pattern of correlations or, if one exists, which of the possible factor structures it is. Therefore the factor structure you consider most likely must be confirmed empirically. For example, factor analysis and principal components analysis are often used to choose items for attitude scales. You cannot assume, though, that the items selected are necessarily all measures of the same attitude just because they all loaded on the same factor or component in the statistical analysis. You have to administer the scale to people and then assess its reliability. Of course, if you construct a theory out of the results of factor or principal components analysis, you have to confirm the theory experimentally.
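One common way to assess the reliability of a scale built this way is Cronbach's alpha, which measures internal consistency across items. The sketch below is my own illustration (the original does not name a specific reliability coefficient, and the simulated attitude data are assumptions): four items driven by one underlying attitude yield a high alpha.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_people x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: four items that all tap a single underlying attitude
rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))
scale = trait + 0.5 * rng.normal(size=(300, 4))

alpha = cronbach_alpha(scale)
print(round(alpha, 2))
```

A high alpha supports, but does not prove, the conclusion that the items measure one attitude; it is exactly the kind of empirical check the analysis itself cannot substitute for.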
The results of exploratory analysis are always speculative, and we cannot have the same confidence in them that we can have in statistical tests of properly framed hypotheses. We'll look at this issue again next week in discussing the interpretation of focus groups.