Decisionmaking index
Statistical analysis and research are simply formal and effective ways to make decisions. That is, they're effective when they're performed correctly and their results are interpreted correctly. The articles listed here describe some important considerations in the use of statistics and research. A couple deal with mundane subjects (the lottery, for example), but they all demonstrate principles of decisionmaking.
Essential concepts
Big data sets/information overload
Data mining
Databases
Polls
Surveys
Tests and measurement
When analysts go bad
Miscellaneous
Essential concepts:
Why We Experiment - and what dependent and independent variables are
Getting Centred - finding the average
The Uses of Variation - an introduction
In Search of the False - the criterion of falsifiability
The Science of Conjecture - why research is about disagreement
A Good Shave - Ockham's (or Occam's) razor
Concepts and Operations - operational definitions
On Significance and Insignificance - both statistical and practical varieties
Is the Difference Real? - advantages of testing for statistical significance
Will That Be One Tail or Two? - types of hypothesis
Expectations - and observations
How to Compare Apples and Oranges - with statistical standardization
Eliminating Accidental Weighting - with statistical standardization
Taking a Fit - fine points of statistical standardization
Who's Afraid of Small Samples? - why bigger is not always better
Beware of Big Samples - big samples often mean tiny results
The Logic of Second Opinions - assessing how well people agree
Secrets of Weight - and stratified sampling
Correlation and Explanation
Inflated Correlations
Deflating Correlations
Better Living through Regression Analysis - deciphering relationships
What is a Trend? - consumers' guide to trend analysis
The Error Rate Problem - the perils of too much significance testing
Scaling - exploiting correlation
Why Good Statistics Go Bad - regression to the mean
Getting control – making fair comparisons
Conquering the Placebo Effect
Looking Beyond the Halo – one way we leap to unjustified conclusions
Size and significance – effect size
Quality and Quantity – a false dichotomy in measurement
Making Your Data Talk – and tell the truth, too
The Data Talk Some More – how simple analysis clarifies data
The Truth About Skew
Are You Objective?
Analyzing Change
Collapsible Data
Exploiting Percentiles
Getting Your Percentage – the fine points of using an effective tool
Data Economics – less is more
Stems and scales – and how to analyze them
Factor analysis – and principal components analysis
Focus groups – a how-not-to guide
Deciding Where to Measure – levels of intervention and measurement
Perils of Meta-Analysis
Value – and how to estimate it
Dissonance and Satisfaction – cognitive dissonance reduction
Transparent Evaluation
Pitfalls of Generalization
Pilots and Follow-ups – making sure results generalize
Perils of Eyeballing – with a selection example
Evaluating Success Rate – with the chi-square test
Law and Fallacy – the gambler's fallacy/"law of averages"
Function Follows Form – form and function in evaluation
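Several of the entries above come down to a single recurring question: is an observed difference real, or just chance? As a concrete illustration of the kind of test discussed in "Evaluating Success Rate", here is a minimal sketch of the chi-square test on a 2x2 table of successes and failures. The figures are hypothetical, and the function name is mine, not the article's.

```python
# Hypothetical sketch of a chi-square test for comparing two success rates.
# Not taken from the article itself; a standard 2x2 chi-square computation.

def chi_square_2x2(a_success, a_failure, b_success, b_failure):
    """Chi-square statistic for a 2x2 table of successes and failures."""
    observed = [a_success, a_failure, b_success, b_failure]
    row_a = a_success + a_failure          # total cases in group A
    row_b = b_success + b_failure          # total cases in group B
    col_s = a_success + b_success          # total successes
    col_f = a_failure + b_failure          # total failures
    total = row_a + row_b
    # Expected count for each cell: (row total * column total) / grand total
    expected = [row_a * col_s / total, row_a * col_f / total,
                row_b * col_s / total, row_b * col_f / total]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical data: Program A succeeds with 30 of 50 clients, Program B with 18 of 50.
stat = chi_square_2x2(30, 20, 18, 32)
# Compare stat against the critical value 3.84 (1 degree of freedom, alpha = 0.05):
# a larger statistic suggests the difference in success rates is not just chance.
```

With these numbers the statistic works out to about 5.77, which exceeds 3.84, so the difference between the two programs would count as statistically significant at the 5% level.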
Big data sets/information overload:
Fat-Free Research - it's good, and good for you!
Plethoratology - staying afloat in a deluge of data
Data Refining - reducing complexity
Information from Scratch - how not to confuse data with information
Data mining:
Perils of Data Mining - avoiding analytical cave-in
Down the Data Mine, part 1 – data mining safety (modelling problems)
Down the Data Mine, part 2 – data mining safety (sampling)
Down the Data Mine, part 3 – data mining safety (data management)
Databases:
The Real Database - how to recognize it
PC Database Fitness (part 1) - how to maintain a healthy PC database
PC Database Fitness (part 2)
PC Database Fitness (part 3)
The Raw Database
Accuracy is not Enough
Polls:
The great polling mystery – what pollsters don't tell you
The Non-confidence Interval – how accurate are poll results?
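The accuracy question raised in "The Non-confidence Interval" has a standard quantitative core: the margin of error reported with a poll. Here is a minimal sketch of that calculation; the sample figures are hypothetical, and the formula assumes a simple random sample, which real polls often are not.

```python
import math

# Hypothetical sketch of the standard 95% margin of error for a poll
# proportion, assuming a simple random sample (real polls often aren't).

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents reporting 52% support for a candidate:
moe = margin_of_error(0.52, 1000)
# moe is roughly 0.031, so true support is plausibly anywhere from about 49% to 55%.
```

Note what the formula does not cover: non-response, biased wording, and unrepresentative samples can all make the real uncertainty much larger than the published margin of error.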
Surveys:
The Satisfactory Satisfaction Survey - getting the most out of satisfaction surveys
The Value of Dissatisfaction
In Defence of Opinion Surveys - the art and science of good surveys
The Best Questions to Ask - and they don't involve rating scales
Ranks
A Question of Priorities - advantages of forced choice items
The Value of Negativity - advantages of negatively worded items
Tests and measurement:
Testing the Tests - a non-technical guide to testing
Reliability - more about accuracy of measurement
Validity - more about the relevance of measurement
When analysts go bad:
On Uninformation - and how to avoid it
The Curse of the Baby Boomers - the lure of the stereotype
Was this Analysis Necessary? - on fear of the obvious
Pitfalls of Profiles - the dangers of generalization
The Most Bogus Sports Statistic - and there are so many
Miscellaneous:
Three Myths of the Information Age - overestimating the power of technology
Principles of Misinformation
Means and Ends
The Myth of Information Technology
Just Say No to Trivia
Contemporary cargo cults - the culture of evaluation
Consumers' Guide: Playing the Lottery - how the "truth" misleads
Is suicide hereditary?
The Big Bad Database
The Good Attitude
The Truth about Baseball
Against the Odds - some gambling ed.
Gambling and Investment - some more gambling ed.
Lightning, Lotteries, and Probability