Preface to the Second Edition: For the Instructor |
|
xxv | |
Preface to the Second Edition: For the Student |
|
xxxiii | |
Acknowledgments |
|
xxxvii | |
Part One Descriptive Statistics |
|
1 | (123) |
|
Introduction to Psychological Statistics |
|
|
1 | (20) |
|
|
1 | (11) |
|
What Is (Are) Statistics? |
|
|
1 | (1) |
|
|
2 | (1) |
|
|
2 | (1) |
|
|
3 | (3) |
|
Continuous versus Discrete Variables |
|
|
6 | (1) |
|
|
7 | (1) |
|
Parametric versus Nonparametric Statistics |
|
|
7 | (1) |
|
Independent versus Dependent Variables |
|
|
7 | (1) |
|
Experimental versus Correlational Research |
|
|
8 | (1) |
|
Populations versus Samples |
|
|
9 | (1) |
|
|
10 | (1) |
|
|
10 | (1) |
|
|
11 | (1) |
|
Basic Statistical Procedures |
|
|
12 | (6) |
|
Variables with Subscripts |
|
|
12 | (1) |
|
|
12 | (1) |
|
Properties of the Summation Sign |
|
|
13 | (3) |
|
|
16 | (1) |
|
|
17 | (1) |
|
|
18 | (1) |
|
|
18 | (3) |
|
|
18 | (1) |
|
Random Variables and Mathematical Distributions |
|
|
19 | (1) |
|
|
20 | (1) |
|
|
20 | (1) |
|
Frequency Tables, Graphs, and Distributions |
|
|
21 | (28) |
|
|
21 | (11) |
|
|
21 | (1) |
|
The Cumulative Frequency Distribution |
|
|
22 | (1) |
|
The Relative Frequency and Cumulative Relative Frequency Distributions |
|
|
23 | (1) |
|
The Cumulative Percentage Distribution |
|
|
23 | (1) |
|
|
24 | (1) |
|
|
24 | (4) |
|
Real versus Theoretical Distributions |
|
|
28 | (1) |
|
|
29 | (2) |
|
|
31 | (1) |
|
Basic Statistical Procedures |
|
|
32 | (12) |
|
Grouped Frequency Distributions |
|
|
32 | (1) |
|
Apparent versus Real Limits |
|
|
32 | (1) |
|
Constructing Class Intervals |
|
|
33 | (1) |
|
Choosing the Class Interval Width |
|
|
33 | (1) |
|
Choosing the Limits of the Lowest Interval |
|
|
34 | (1) |
|
Relative and Cumulative Frequency Distributions |
|
|
35 | (1) |
|
Cumulative Percentage Distribution |
|
|
35 | (1) |
|
Finding Percentiles and Percentile Ranks by Formula |
|
|
36 | (3) |
|
Graphing a Grouped Frequency Distribution |
|
|
39 | (1) |
|
Guidelines for Drawing Graphs of Frequency Distributions |
|
|
40 | (2) |
|
|
42 | (1) |
|
|
43 | (1) |
|
|
44 | (5) |
|
|
44 | (3) |
|
|
47 | (1) |
|
|
47 | (1) |
|
|
48 | (1) |
|
Measures of Central Tendency and Variability |
|
|
49 | (40) |
|
|
49 | (18) |
|
Measures of Central Tendency |
|
|
49 | (4) |
|
|
53 | (8) |
|
|
61 | (4) |
|
|
65 | (1) |
|
|
66 | (1) |
|
Basic Statistical Procedures |
|
|
67 | (10) |
|
|
67 | (2) |
|
Computational Formulas for the Variance and Standard Deviation |
|
|
69 | (2) |
|
Obtaining the Standard Deviation Directly from Your Calculator |
|
|
71 | (1) |
|
|
72 | (1) |
|
Properties of the Standard Deviation |
|
|
73 | (2) |
|
|
75 | (1) |
|
|
76 | (1) |
|
|
77 | (12) |
|
|
77 | (3) |
|
|
80 | (1) |
|
|
81 | (2) |
|
|
83 | (2) |
|
|
85 | (1) |
|
|
86 | (1) |
|
|
86 | (3) |
|
Standardized Scores and the Normal Distribution |
|
|
89 | (35) |
|
|
89 | (15) |
|
|
89 | (2) |
|
Finding a Raw Score from a z Score |
|
|
91 | (1) |
|
|
91 | (1) |
|
|
92 | (1) |
|
|
93 | (1) |
|
|
94 | (2) |
|
Introducing Probability: Smooth Distributions versus Discrete Events |
|
|
96 | (1) |
|
Real Distributions versus the Normal Distribution |
|
|
97 | (1) |
|
z Scores as a Research Tool |
|
|
98 | (1) |
|
Sampling Distribution of the Mean |
|
|
99 | (1) |
|
Standard Error of the Mean |
|
|
100 | (1) |
|
Sampling Distribution versus Population Distribution |
|
|
101 | (1) |
|
|
102 | (1) |
|
|
103 | (1) |
|
Basic Statistical Procedures |
|
|
104 | (11) |
|
|
104 | (2) |
|
Finding the Area between Two z Scores |
|
|
106 | (1) |
|
Finding the Raw Scores Corresponding to a Given Area |
|
|
107 | (1) |
|
Areas in the Middle of a Distribution |
|
|
108 | (1) |
|
From Score to Proportion and Proportion to Score |
|
|
109 | (1) |
|
|
109 | (4) |
|
|
113 | (1) |
|
|
114 | (1) |
|
|
115 | (9) |
|
The Mathematics of the Normal Distribution |
|
|
115 | (1) |
|
The Central Limit Theorem |
|
|
116 | (1) |
|
|
117 | (4) |
|
|
121 | (1) |
|
|
122 | (1) |
|
|
123 | (1) |
Part Two One- and Two-Sample Hypothesis Tests |
|
124 | (117) |
|
Introduction to Hypothesis Testing: The One-Sample z Test |
|
|
124 | (30) |
|
|
124 | (13) |
|
Selecting a Group of Subjects |
|
|
124 | (1) |
|
The Need for Hypothesis Testing |
|
|
125 | (1) |
|
The Logic of Null Hypothesis Testing |
|
|
126 | (1) |
|
The Null Hypothesis Distribution |
|
|
126 | (1) |
|
The Null Hypothesis Distribution for the One-Sample Case |
|
|
127 | (1) |
|
z Scores and the Null Hypothesis Distribution |
|
|
128 | (1) |
|
|
129 | (1) |
|
The z Score as Test Statistic |
|
|
130 | (1) |
|
Type I and Type II Errors |
|
|
131 | (1) |
|
The Trade-Off between Type I and Type II Errors |
|
|
132 | (1) |
|
One-Tailed versus Two-Tailed Tests |
|
|
133 | (2) |
|
|
135 | (1) |
|
|
136 | (1) |
|
Basic Statistical Procedures |
|
|
137 | (12) |
|
|
137 | (2) |
|
Select the Statistical Test and the Significance Level |
|
|
139 | (1) |
|
Select the Sample and Collect the Data |
|
|
139 | (1) |
|
Find the Region of Rejection |
|
|
139 | (2) |
|
Calculate the Test Statistic |
|
|
141 | (1) |
|
Make the Statistical Decision |
|
|
142 | (1) |
|
|
143 | (1) |
|
Assumptions Underlying the One-Sample z Test |
|
|
143 | (1) |
|
Varieties of the One-Sample Test |
|
|
144 | (1) |
|
Why the One-Sample Test Is Rarely Performed |
|
|
145 | (1) |
|
Publishing the Results of One-Sample Tests |
|
|
146 | (1) |
|
|
147 | (1) |
|
|
148 | (1) |
|
|
149 | (5) |
|
|
152 | (1) |
|
|
152 | (1) |
|
|
153 | (1) |
|
Interval Estimation and the t Distribution |
|
|
154 | (28) |
|
|
154 | (11) |
|
The Mean of the Null Hypothesis Distribution |
|
|
155 | (1) |
|
When the Population Standard Deviation Is Not Known |
|
|
155 | (1) |
|
Calculating a Simple Example |
|
|
156 | (1) |
|
|
156 | (2) |
|
Degrees of Freedom and the t Distribution |
|
|
158 | (1) |
|
Critical Values of the t Distribution |
|
|
159 | (1) |
|
Calculating the One-Sample t Test |
|
|
160 | (1) |
|
Sample Size and the One-Sample t Test |
|
|
160 | (1) |
|
Uses for the One-Sample t Test |
|
|
161 | (1) |
|
Cautions Concerning the One-Sample t Test |
|
|
161 | (1) |
|
Estimating the Population Mean |
|
|
162 | (1) |
|
|
163 | (1) |
|
|
164 | (1) |
|
Basic Statistical Procedures |
|
|
165 | (9) |
|
|
165 | (1) |
|
Select the Level of Confidence |
|
|
165 | (1) |
|
Select the Random Sample and Collect the Data |
|
|
165 | (1) |
|
Calculate the Limits of the Interval |
|
|
166 | (4) |
|
Relationship between Interval Estimation and Null Hypothesis Testing |
|
|
170 | (1) |
|
Assumptions Underlying the One-Sample t Test and the Confidence Interval for the Population Mean |
|
|
170 | (1) |
|
Use of the Confidence Interval for the Population Mean |
|
|
171 | (1) |
|
Publishing the Results of One-Sample t Tests |
|
|
172 | (1) |
|
|
172 | (1) |
|
|
173 | (1) |
|
|
174 | (8) |
|
The Sampling Distribution of the Variance |
|
|
175 | (1) |
|
The Chi-Square Distribution |
|
|
176 | (1) |
|
Hypothesis Testing for Sample Variances |
|
|
176 | (2) |
|
The t Distribution Revisited |
|
|
178 | (1) |
|
Some Properties of Estimators |
|
|
178 | (1) |
|
|
179 | (1) |
|
|
179 | (1) |
|
|
180 | (2) |
|
The t Test for Two Independent Sample Means |
|
|
182 | (32) |
|
|
182 | (11) |
|
Null Hypothesis Distribution for the Differences of Two Sample Means |
|
|
183 | (1) |
|
Standard Error of the Difference |
|
|
184 | (1) |
|
Formula for Comparing the Means of Two Samples |
|
|
185 | (1) |
|
Null Hypothesis for the Two-Sample Case |
|
|
186 | (1) |
|
The z Test for Two Large Samples |
|
|
187 | (1) |
|
Separate-Variances t Test |
|
|
187 | (1) |
|
The Pooled-Variances Estimate |
|
|
188 | (1) |
|
The Pooled-Variances t Test |
|
|
189 | (1) |
|
Formula for Equal Sample Sizes |
|
|
189 | (1) |
|
Calculating the Two-Sample t Test |
|
|
190 | (1) |
|
Interpreting the Calculated t |
|
|
190 | (1) |
|
Limitations of Statistical Conclusions |
|
|
191 | (1) |
|
|
192 | (1) |
|
|
192 | (1) |
|
Basic Statistical Procedures |
|
|
193 | (13) |
|
|
194 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
194 | (1) |
|
Select the Samples and Collect the Data |
|
|
195 | (1) |
|
Find the Region of Rejection |
|
|
195 | (1) |
|
Calculate the Test Statistic |
|
|
196 | (1) |
|
Make the Statistical Decision |
|
|
197 | (1) |
|
|
197 | (1) |
|
Confidence Intervals for the Difference between Two Population Means |
|
|
198 | (2) |
|
Assumptions of the t Test for Two Independent Samples |
|
|
200 | (2) |
|
When to Use the Two-Sample t Test |
|
|
202 | (1) |
|
When to Construct Confidence Intervals |
|
|
203 | (1) |
|
Publishing the Results of the Two-Sample t Test |
|
|
203 | (1) |
|
|
204 | (1) |
|
|
205 | (1) |
|
|
206 | (8) |
|
Zero Differences between Sample Means |
|
|
206 | (1) |
|
Adding Variances to Find the Variance of the Difference |
|
|
206 | (1) |
|
When to Use the Separate-Variances t Test |
|
|
207 | (3) |
|
Heterogeneity of Variance as an Experimental Result |
|
|
210 | (1) |
|
|
210 | (1) |
|
|
211 | (1) |
|
|
212 | (2) |
|
Statistical Power and Effect Size |
|
|
214 | (27) |
|
|
214 | (11) |
|
The Alternative Hypothesis Distribution |
|
|
214 | (2) |
|
The Expected t Value (Delta) |
|
|
216 | (2) |
|
|
218 | (1) |
|
|
219 | (1) |
|
The Interpretation of t Values |
|
|
220 | (1) |
|
|
221 | (2) |
|
|
223 | (1) |
|
|
223 | (1) |
|
|
224 | (1) |
|
Basic Statistical Procedures |
|
|
225 | (7) |
|
|
225 | (1) |
|
The Relationship between Alpha and Power |
|
|
226 | (1) |
|
Power Analysis with Fixed Sample Sizes |
|
|
227 | (1) |
|
Sample Size Determination |
|
|
228 | (2) |
|
The Power of a One-Sample Test |
|
|
230 | (1) |
|
|
230 | (1) |
|
|
231 | (1) |
|
|
232 | (9) |
|
The Case of Unequal Sample Sizes |
|
|
232 | (1) |
|
The Case against Null Hypothesis Testing: The Null Is Never True |
|
|
233 | (1) |
|
The Limited Case for NHST |
|
|
233 | (1) |
|
The General Case for NHST: Screening Out Small Effect Sizes |
|
|
234 | (2) |
|
Supplementing the Null Hypothesis Test |
|
|
236 | (2) |
|
|
238 | (1) |
|
|
239 | (1) |
|
|
239 | (2) |
Part Three Hypothesis Tests Involving Two Measures on Each Subject |
|
241 | (83) |
|
|
241 | (31) |
|
|
241 | (12) |
|
|
241 | (1) |
|
|
242 | (1) |
|
The Correlation Coefficient |
|
|
242 | (2) |
|
|
244 | (1) |
|
|
244 | (2) |
|
Dealing with Curvilinear Relationships |
|
|
246 | (1) |
|
Problems in Generalizing from Sample Correlations |
|
|
247 | (2) |
|
Correlation Does Not Imply Causation |
|
|
249 | (1) |
|
True Experiments Involving Correlation |
|
|
250 | (1) |
|
|
250 | (1) |
|
|
251 | (2) |
|
Basic Statistical Procedures |
|
|
253 | (11) |
|
|
253 | (1) |
|
|
254 | (1) |
|
An Example of Calculating Pearson's r |
|
|
254 | (1) |
|
|
255 | (1) |
|
|
256 | (1) |
|
Testing Pearson's r for Significance |
|
|
256 | (2) |
|
Understanding the Degrees of Freedom |
|
|
258 | (1) |
|
Assumptions Associated with Pearson's r |
|
|
258 | (2) |
|
Uses of the Pearson Correlation Coefficient |
|
|
260 | (1) |
|
Publishing the Results of Correlational Studies |
|
|
261 | (1) |
|
|
261 | (1) |
|
|
262 | (2) |
|
|
264 | (8) |
|
The Power Associated with Correlational Tests |
|
|
264 | (2) |
|
|
266 | (1) |
|
The Confidence Interval for ρ |
|
|
266 | (1) |
|
Testing a Null Hypothesis Other Than ρ = 0 |
|
|
267 | (1) |
|
Testing the Difference of Two Independent Sample r's |
|
|
268 | (1) |
|
|
269 | (1) |
|
|
269 | (1) |
|
|
270 | (2) |
|
|
272 | (29) |
|
|
272 | (11) |
|
|
272 | (1) |
|
|
273 | (1) |
|
|
273 | (1) |
|
Regression toward the Mean |
|
|
274 | (1) |
|
Graphing Regression in Terms of z Scores |
|
|
274 | (1) |
|
The Raw-Score Regression Formula |
|
|
275 | (1) |
|
The Slope and the Y Intercept |
|
|
276 | (1) |
|
Predictions Based on Raw Scores |
|
|
277 | (1) |
|
Interpreting the Y Intercept |
|
|
278 | (1) |
|
Quantifying the Errors around the Regression Line |
|
|
278 | (1) |
|
The Variance of the Estimate |
|
|
279 | (1) |
|
Explained and Unexplained Variance |
|
|
280 | (1) |
|
The Coefficient of Determination |
|
|
280 | (1) |
|
The Coefficient of Nondetermination |
|
|
281 | (1) |
|
Calculating the Variance of the Estimate |
|
|
281 | (1) |
|
|
281 | (1) |
|
|
282 | (1) |
|
Basic Statistical Procedures |
|
|
283 | (10) |
|
|
283 | (1) |
|
Regression in Terms of Sample Statistics |
|
|
283 | (1) |
|
Finding the Regression Equation |
|
|
284 | (1) |
|
|
284 | (1) |
|
Using Sample Statistics to Estimate the Variance of the Estimate |
|
|
285 | (1) |
|
Standard Error of the Estimate |
|
|
286 | (1) |
|
Confidence Intervals for Predictions |
|
|
286 | (1) |
|
An Example of a Confidence Interval |
|
|
287 | (1) |
|
Assumptions Underlying Linear Regression |
|
|
288 | (1) |
|
|
288 | (1) |
|
|
289 | (1) |
|
When to Use Linear Regression |
|
|
289 | (2) |
|
|
291 | (1) |
|
|
292 | (1) |
|
|
293 | (8) |
|
The Point-Biserial Correlation Coefficient |
|
|
293 | (1) |
|
|
294 | (1) |
|
Deriving r_pb from a t Value |
|
|
295 | (1) |
|
|
296 | (1) |
|
Strength of Association in the Population (Omega Squared) |
|
|
296 | (1) |
|
|
297 | (1) |
|
|
298 | (1) |
|
|
298 | (1) |
|
|
299 | (2) |
|
|
301 | (23) |
|
|
301 | (9) |
|
|
301 | (1) |
|
The Direct-Difference Method |
|
|
302 | (1) |
|
The Matched t Test as a Function of Linear Correlation |
|
|
303 | (2) |
|
Reduction in Degrees of Freedom |
|
|
305 | (1) |
|
Drawback of the Before-After Design |
|
|
305 | (1) |
|
Other Repeated-Measures Designs |
|
|
305 | (1) |
|
|
306 | (1) |
|
Correlated or Dependent Samples |
|
|
307 | (1) |
|
When Not to Use the Matched t Test |
|
|
307 | (1) |
|
|
308 | (1) |
|
|
309 | (1) |
|
Basic Statistical Procedures |
|
|
310 | (10) |
|
|
310 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
310 | (1) |
|
Select the Samples and Collect the Data |
|
|
310 | (1) |
|
Find the Region of Rejection |
|
|
311 | (1) |
|
Calculate the Test Statistic |
|
|
312 | (1) |
|
Make the Statistical Decision |
|
|
312 | (1) |
|
Using the Correlation Formula for the Matched t Test |
|
|
312 | (1) |
|
Raw-Score Formula for the Matched t Test |
|
|
313 | (1) |
|
The Confidence Interval for the Difference of Two Population Means |
|
|
314 | (1) |
|
Assumptions of the Matched t Test |
|
|
315 | (1) |
|
The Varieties of Designs Calling for the Matched t Test |
|
|
315 | (2) |
|
Publishing the Results of a Matched t Test |
|
|
317 | (1) |
|
|
317 | (1) |
|
|
318 | (2) |
|
|
320 | (4) |
|
|
322 | (1) |
|
|
322 | (1) |
|
|
323 | (1) |
Part Four Analysis of Variance without Repeated Measures |
|
324 | (111) |
|
One-Way Independent ANOVA |
|
|
324 | (38) |
|
|
324 | (12) |
|
Transforming the t Test into ANOVA |
|
|
325 | (1) |
|
Expanding the Denominator |
|
|
326 | (1) |
|
|
326 | (1) |
|
|
327 | (1) |
|
The F Ratio As a Ratio of Two Population Variance Estimates |
|
|
327 | (1) |
|
Degrees of Freedom and the F Distribution |
|
|
328 | (1) |
|
The Shape of the F Distribution |
|
|
329 | (1) |
|
ANOVA As a One-Tailed Test |
|
|
329 | (1) |
|
|
330 | (1) |
|
An Example with Three Equal-Sized Groups |
|
|
330 | (1) |
|
Calculating a Simple ANOVA |
|
|
331 | (1) |
|
|
332 | (1) |
|
Advantages of the One-Way ANOVA |
|
|
333 | (1) |
|
|
334 | (1) |
|
|
334 | (2) |
|
Basic Statistical Procedures |
|
|
336 | (14) |
|
An ANOVA Example with Unequal Sample Sizes |
|
|
336 | (1) |
|
|
336 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
336 | (1) |
|
Select the Samples and Collect the Data |
|
|
336 | (1) |
|
Find the Region of Rejection |
|
|
337 | (1) |
|
Calculate the Test Statistic |
|
|
337 | (2) |
|
Make the Statistical Decision |
|
|
339 | (1) |
|
Interpreting Significant Results |
|
|
339 | (1) |
|
The Sums of Squares Approach |
|
|
340 | (1) |
|
|
340 | (2) |
|
Assumptions of the One-Way ANOVA for Independent Groups |
|
|
342 | (1) |
|
Varieties of the One-Way ANOVA |
|
|
343 | (2) |
|
Publishing the Results of a One-Way ANOVA |
|
|
345 | (2) |
|
|
347 | (1) |
|
|
348 | (2) |
|
|
350 | (12) |
|
Testing Homogeneity of Variance |
|
|
350 | (1) |
|
Effect Size and Proportion of Variance Accounted For |
|
|
351 | (3) |
|
|
354 | (3) |
|
|
357 | (1) |
|
|
358 | (1) |
|
|
359 | (3) |
|
|
362 | (29) |
|
|
362 | (11) |
|
The Number of Possible t Tests |
|
|
362 | (1) |
|
|
363 | (1) |
|
|
364 | (1) |
|
|
364 | (1) |
|
Fisher's Protected t Tests |
|
|
364 | (2) |
|
Complete versus Partial Null Hypotheses |
|
|
366 | (1) |
|
|
367 | (1) |
|
The Studentized Range Statistic |
|
|
367 | (1) |
|
Advantages and Disadvantages of Tukey's Test |
|
|
368 | (1) |
|
Other Procedures for Post Hoc Pairwise Comparisons |
|
|
369 | (2) |
|
The Advantage of Planning Ahead |
|
|
371 | (1) |
|
|
371 | (1) |
|
|
372 | (1) |
|
Basic Statistical Procedures |
|
|
373 | (10) |
|
Calculating Protected t Tests |
|
|
373 | (1) |
|
|
374 | (1) |
|
|
374 | (1) |
|
Interpreting the Results of the LSD and HSD Procedures |
|
|
375 | (1) |
|
Assumptions of the Fisher and Tukey Procedures |
|
|
376 | (1) |
|
Bonferroni t, or Dunn's Test |
|
|
376 | (1) |
|
|
377 | (3) |
|
|
380 | (1) |
|
Which Post Hoc Comparison Procedure Should You Use? |
|
|
380 | (1) |
|
|
381 | (1) |
|
|
382 | (1) |
|
|
383 | (8) |
|
|
383 | (1) |
|
|
384 | (1) |
|
Planned (a Priori) Comparisons |
|
|
385 | (3) |
|
|
388 | (1) |
|
|
388 | (1) |
|
|
389 | (2) |
|
|
391 | (44) |
|
|
391 | (16) |
|
Calculating a Simple One-Way ANOVA |
|
|
391 | (1) |
|
|
392 | (1) |
|
Regrouping the Sums of Squares |
|
|
393 | (1) |
|
|
393 | (1) |
|
Calculating the Two-Way ANOVA |
|
|
394 | (1) |
|
|
395 | (1) |
|
Calculating MSbet for the Drug Treatment Factor |
|
|
395 | (1) |
|
Calculating MSbet for the Gender Factor |
|
|
395 | (1) |
|
|
396 | (1) |
|
The Case of Zero Interaction |
|
|
397 | (1) |
|
|
398 | (1) |
|
Calculating the Variability Due to Interaction |
|
|
399 | (1) |
|
|
399 | (3) |
|
Separating Interactions from Cell Means |
|
|
402 | (1) |
|
The F Ratio in a Two-Way ANOVA |
|
|
403 | (1) |
|
Advantages of the Two-Way Design |
|
|
404 | (1) |
|
|
405 | (1) |
|
|
406 | (1) |
|
Basic Statistical Procedures |
|
|
407 | (14) |
|
State the Null Hypothesis |
|
|
407 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
408 | (1) |
|
Select the Samples and Collect the Data |
|
|
408 | (1) |
|
Find the Regions of Rejection |
|
|
408 | (1) |
|
Calculate the Test Statistics |
|
|
408 | (5) |
|
Make the Statistical Decisions |
|
|
413 | (1) |
|
The Summary Table for a Two-Way ANOVA |
|
|
413 | (1) |
|
|
413 | (1) |
|
Assumptions of the Two-Way ANOVA for Independent Groups |
|
|
414 | (1) |
|
Advantages of the Two-Way ANOVA with Two Experimental Factors |
|
|
415 | (1) |
|
Advantages of the Two-Way ANOVA with One Grouping Factor |
|
|
416 | (1) |
|
Advantages of the Two-Way ANOVA with Two Grouping Factors |
|
|
417 | (1) |
|
Publishing the Results of a Two-Way ANOVA |
|
|
417 | (1) |
|
|
418 | (1) |
|
|
419 | (2) |
|
|
421 | (14) |
|
Post Hoc Comparisons for the Two-Way ANOVA |
|
|
421 | (4) |
|
|
425 | (2) |
|
|
427 | (1) |
|
The Two-Way ANOVA for Unbalanced Designs |
|
|
428 | (3) |
|
|
431 | (1) |
|
|
432 | (1) |
|
|
433 | (2) |
Part Five Analysis of Variance with Repeated Measures |
|
435 | (72) |
|
|
435 | (37) |
|
|
435 | (11) |
|
Calculation of an Independent-Groups ANOVA |
|
|
435 | (1) |
|
The One-Way RM ANOVA as a Two-Way Independent ANOVA |
|
|
436 | (1) |
|
Calculating the SS Components of the RM ANOVA |
|
|
437 | (1) |
|
Comparing the Independent ANOVA with the RM ANOVA |
|
|
438 | (1) |
|
The Advantage of the RM ANOVA |
|
|
439 | (1) |
|
Picturing the Subject by Treatment Interaction |
|
|
440 | (1) |
|
Comparing the RM ANOVA to a Matched t Test |
|
|
440 | (2) |
|
Dealing with Order Effects |
|
|
442 | (1) |
|
Differential Carryover Effects |
|
|
443 | (1) |
|
The Randomized-Blocks Design |
|
|
443 | (1) |
|
|
444 | (1) |
|
|
445 | (1) |
|
Basic Statistical Procedures |
|
|
446 | (18) |
|
|
446 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
447 | (1) |
|
Select the Samples and Collect the Data |
|
|
447 | (1) |
|
Find the Region of Rejection |
|
|
447 | (1) |
|
Calculate the Test Statistic |
|
|
447 | (1) |
|
Make the Statistical Decision |
|
|
448 | (1) |
|
|
449 | (1) |
|
|
449 | (2) |
|
Assumptions of the RM ANOVA |
|
|
451 | (2) |
|
Dealing with a Lack of Sphericity |
|
|
453 | (3) |
|
|
456 | (1) |
|
Varieties of Repeated-Measures and Randomized-Blocks Designs |
|
|
457 | (1) |
|
Publishing the Results of an RM ANOVA |
|
|
458 | (2) |
|
|
460 | (1) |
|
|
461 | (3) |
|
|
464 | (8) |
|
|
464 | (1) |
|
|
464 | (3) |
|
|
467 | (2) |
|
|
469 | (1) |
|
|
470 | (1) |
|
|
471 | (1) |
|
Two-Way Mixed Design ANOVA |
|
|
472 | (35) |
|
|
472 | (10) |
|
The One-Way RM ANOVA Revisited |
|
|
472 | (2) |
|
Converting the One-Way RM ANOVA to a Mixed Design ANOVA |
|
|
474 | (3) |
|
Two-Way Interaction in the Mixed Design ANOVA |
|
|
477 | (1) |
|
Summarizing the Mixed Design ANOVA |
|
|
478 | (1) |
|
|
479 | (1) |
|
The Varieties of Mixed Designs |
|
|
479 | (2) |
|
|
481 | (1) |
|
|
482 | (1) |
|
Basic Statistical Procedures |
|
|
482 | (15) |
|
|
483 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
483 | (1) |
|
Select the Samples and Collect the Data |
|
|
483 | (1) |
|
Find the Regions of Rejection |
|
|
484 | (1) |
|
Calculate the Test Statistics |
|
|
485 | (2) |
|
Make the Statistical Decisions |
|
|
487 | (1) |
|
|
488 | (1) |
|
Publishing the Results of a Mixed ANOVA |
|
|
489 | (1) |
|
Assumptions of the Mixed Design ANOVA |
|
|
489 | (2) |
|
Dealing with a Lack of Sphericity in Mixed Designs |
|
|
491 | (1) |
|
A Special Case: The Before-After Mixed Design |
|
|
491 | (1) |
|
|
492 | (1) |
|
An Excerpt from the Psychological Literature |
|
|
493 | (1) |
|
|
494 | (1) |
|
|
495 | (2) |
|
|
497 | (10) |
|
Post Hoc Comparisons When the Two Factors Interact |
|
|
497 | (2) |
|
Planned and Complex Comparisons |
|
|
499 | (1) |
|
Removing Error Variance from Counterbalanced Designs |
|
|
500 | (3) |
|
|
503 | (1) |
|
|
504 | (1) |
|
|
505 | (2) |
Part Six Multiple Regression and Its Connection to ANOVA |
|
507 | (104) |
|
|
507 | (56) |
|
|
507 | (20) |
|
|
508 | (1) |
|
The Standardized Regression Equation |
|
|
509 | (1) |
|
More Than Two Mutually Uncorrelated Predictors |
|
|
509 | (1) |
|
|
510 | (1) |
|
Two Correlated Predictors |
|
|
510 | (1) |
|
|
511 | (2) |
|
Completely Redundant Predictors |
|
|
513 | (1) |
|
Partial Regression Slopes |
|
|
513 | (2) |
|
|
515 | (1) |
|
|
515 | (1) |
|
Calculating the Semipartial Correlation |
|
|
516 | (1) |
|
|
517 | (1) |
|
|
518 | (1) |
|
The Raw-Score Prediction Formula |
|
|
519 | (1) |
|
|
520 | (2) |
|
Finding the Best Prediction Equation |
|
|
522 | (1) |
|
Hierarchical (Theory-Based) Regression |
|
|
523 | (1) |
|
|
524 | (1) |
|
|
525 | (2) |
|
Basic Statistical Procedures |
|
|
527 | (21) |
|
The Significance Test for Multiple R |
|
|
527 | (1) |
|
Tests for the Significance of Individual Predictors |
|
|
528 | (1) |
|
|
529 | (2) |
|
|
531 | (1) |
|
|
532 | (1) |
|
The Misuse of Stepwise Regression |
|
|
533 | (1) |
|
Problems Associated with Having Many Predictors |
|
|
533 | (4) |
|
|
537 | (1) |
|
|
537 | (1) |
|
Basic Assumptions of Multiple Regression |
|
|
537 | (3) |
|
Regression with Dichotomous Predictors |
|
|
540 | (1) |
|
Multiple Regression as a Research Tool |
|
|
541 | (3) |
|
Publishing the Results of Multiple Regression |
|
|
544 | (1) |
|
|
545 | (1) |
|
|
546 | (2) |
|
|
548 | (15) |
|
Dealing with Curvilinear Relationships |
|
|
548 | (3) |
|
|
551 | (2) |
|
Multiple Regression with a Dichotomous Criterion |
|
|
553 | (3) |
|
|
556 | (3) |
|
|
559 | (1) |
|
|
560 | (1) |
|
|
561 | (2) |
|
The Regression Approach to ANOVA |
|
|
563 | (48) |
|
|
563 | (14) |
|
|
564 | (1) |
|
|
564 | (1) |
|
|
565 | (1) |
|
|
566 | (1) |
|
Equivalence of Testing ANOVA and R2 |
|
|
566 | (1) |
|
Two-Way ANOVA as Regression |
|
|
567 | (2) |
|
The GLM for Higher-Order ANOVA |
|
|
569 | (1) |
|
Analyzing Unbalanced Designs |
|
|
570 | (3) |
|
Methods for Controlling Variance |
|
|
573 | (2) |
|
|
575 | (1) |
|
|
576 | (1) |
|
Basic Statistical Procedures |
|
|
577 | (23) |
|
Simple ANCOVA as Multiple Regression |
|
|
578 | (2) |
|
The Linear Regression Approach to ANCOVA |
|
|
580 | (8) |
|
|
588 | (1) |
|
Performing ANCOVA by Multiple Regression |
|
|
589 | (1) |
|
|
589 | (1) |
|
The Assumptions of ANCOVA |
|
|
590 | (1) |
|
Additional Considerations |
|
|
591 | (1) |
|
|
592 | (1) |
|
Using Two or More Covariates |
|
|
592 | (1) |
|
|
593 | (2) |
|
Using ANCOVA with Intact Groups |
|
|
595 | (1) |
|
|
596 | (1) |
|
|
597 | (3) |
|
|
600 | (11) |
|
|
600 | (7) |
|
|
607 | (1) |
|
|
608 | (1) |
|
|
609 | (2) |
Part Seven Nonparametric Statistics |
|
611 | (78) |
|
The Binomial Distribution |
|
|
611 | (23) |
|
|
611 | (9) |
|
The Origin of the Binomial Distribution |
|
|
612 | (1) |
|
The Binomial Distribution with N = 4 |
|
|
613 | (1) |
|
The Binomial Distribution with N = 12 |
|
|
614 | (1) |
|
When the Binomial Distribution Is Not Symmetrical |
|
|
615 | (1) |
|
The Normal Approximation to the Binomial Distribution |
|
|
616 | (1) |
|
The z Test for Proportions |
|
|
617 | (1) |
|
|
618 | (1) |
|
|
619 | (1) |
|
Basic Statistical Procedures |
|
|
620 | (6) |
|
|
620 | (1) |
|
Select the Statistical Test and the Significance Level |
|
|
620 | (1) |
|
Select the Samples and Collect the Data |
|
|
620 | (1) |
|
Find the Region of Rejection |
|
|
621 | (1) |
|
Calculate the Test Statistic |
|
|
621 | (1) |
|
Make the Statistical Decision |
|
|
621 | (1) |
|
|
622 | (1) |
|
Assumptions of the Sign Test |
|
|
622 | (1) |
|
|
623 | (1) |
|
When to Use the Binomial Distribution for Null Hypothesis Testing |
|
|
623 | (2) |
|
|
625 | (1) |
|
|
626 | (1) |
|
|
626 | (8) |
|
The Classical Approach to Probability |
|
|
626 | (1) |
|
The Rules of Probability Applied to Discrete Variables |
|
|
627 | (1) |
|
Permutations and Combinations |
|
|
628 | (2) |
|
Constructing the Binomial Distribution |
|
|
630 | (1) |
|
The Empirical Approach to Probability |
|
|
631 | (1) |
|
|
631 | (1) |
|
|
632 | (1) |
|
|
633 | (1) |
|
|
634 | (28) |
|
|
634 | (8) |
|
The Multinomial Distribution |
|
|
634 | (1) |
|
The Chi-Square Distribution |
|
|
635 | (1) |
|
Expected and Observed Frequencies |
|
|
635 | (1) |
|
|
636 | (1) |
|
Critical Values of Chi-Square |
|
|
636 | (1) |
|
Tails of the Chi-Square Distribution |
|
|
637 | (1) |
|
Expected Frequencies Based on No Preference |
|
|
638 | (1) |
|
The Varieties of One-Way Chi-Square Tests |
|
|
639 | (2) |
|
|
641 | (1) |
|
|
641 | (1) |
|
Basic Statistical Procedures |
|
|
642 | (10) |
|
Two-Variable Contingency Tables |
|
|
642 | (1) |
|
Pearson's Chi-Square Test of Association |
|
|
643 | (1) |
|
An Example of Hypothesis Testing with Categorical Data |
|
|
643 | (3) |
|
The Simplest Case: 2 × 2 Tables |
|
|
646 | (1) |
|
Assumptions of the Chi-Square Test |
|
|
647 | (1) |
|
Some Uses for the Chi-Square Test for Independence |
|
|
648 | (1) |
|
Publishing the Results of a Chi-Square Test |
|
|
649 | (1) |
|
|
650 | (1) |
|
|
650 | (2) |
|
|
652 | (10) |
|
Measuring Strength of Association |
|
|
652 | (3) |
|
Measuring Interrater Agreement When Using Nominal Scales |
|
|
655 | (2) |
|
|
657 | (1) |
|
Contingency Tables Involving More Than Two Variables |
|
|
658 | (1) |
|
|
659 | (1) |
|
|
660 | (1) |
|
|
660 | (2) |
|
Statistical Tests for Ordinal Data |
|
|
662 | (27) |
|
|
662 | (7) |
|
|
662 | (1) |
|
Comparing the Ranks from Two Separate Groups |
|
|
662 | (1) |
|
|
663 | (1) |
|
|
663 | (1) |
|
|
664 | (1) |
|
When to Use the Mann-Whitney Test |
|
|
665 | (2) |
|
Repeated Measures or Matched Samples |
|
|
667 | (1) |
|
|
667 | (1) |
|
|
668 | (1) |
|
Basic Statistical Procedures |
|
|
669 | (12) |
|
Testing for a Difference in Ranks between Two Independent Groups: The Mann-Whitney Test |
|
|
669 | (3) |
|
Ranking the Differences between Paired Scores: The Wilcoxon Signed-Ranks Test |
|
|
672 | (4) |
|
Correlation with Ordinal Data: The Spearman Correlation Coefficient |
|
|
676 | (2) |
|
|
678 | (1) |
|
|
679 | (2) |
|
|
681 | (8) |
|
Testing for Differences in Ranks among Several Groups: The Kruskal-Wallis Test |
|
|
681 | (1) |
|
Testing for Differences in Ranks among Matched Subjects: The Friedman Test |
|
|
682 | (2) |
|
Kendall's Coefficient of Concordance |
|
|
684 | (2) |
|
|
686 | (1) |
|
|
686 | (1) |
|
|
687 | (2) |
Appendix A Statistical Tables |
|
689 | (20) |
|
A.1 Areas under the Standard Normal Distribution |
|
|
689 | (3) |
|
A.2 Critical Values of the t Distribution |
|
|
692 | (1) |
|
A.3 Power as a Function of δ and α |
|
|
693 | (1) |
|
A.4 δ As a Function of α and Power |
|
|
694 | (1) |
|
A.5 Critical Values of Pearson's r |
|
|
695 | (1) |
|
A.6 Table of Fisher's Transformation from r to Z |
|
|
696 | (1) |
|
A.7 Critical Values of the F Distribution for α = .05 |
|
|
697 | (1) |
|
A.8 Critical Values of the F Distribution for α = .025 |
|
|
698 | (1) |
|
A.9 Critical Values of the F Distribution for α = .01 |
|
|
699 | (1) |
|
A.10 Power of ANOVA for α = .05 |
|
|
700 | (1) |
|
A.11 Critical Values of the Studentized Range Statistic for α = .05 |
|
|
701 | (1) |
|
A.12 Orthogonal Polynomial Trend Coefficients |
|
|
702 | (1) |
|
A.13 Probabilities of the Binomial Distribution for P = .5 |
|
|
703 | (1) |
|
A.14 Critical Values of the χ² Distribution |
|
|
704 | (1) |
|
A.15 Critical Values for the Wilcoxon Rank-Sum Test |
|
|
705 | (2) |
|
A.16 Critical Values for the Wilcoxon Signed-Ranks Test |
|
|
707 | (2) |
Appendix B Answers to Selected Exercises |
|
709 | (18) |
References |
|
727 | (6) |
Index |
|
733 | |