One-Way ANOVA Calculator · Statistical Analysis Tool (Fisher, 1921)
The calculator provides:
  • Group configuration and group descriptive statistics
  • ANOVA summary table and critical F-values (df_between = k − 1, df_within = N − k, where k = number of groups and N = total observations)
  • Group means with error bars (±1 SD) and a distribution overview of group spread
  • Effect size
  • Post-hoc pairwise comparisons
  • APA 7th edition reporting statement with a copy-paste template
  • Calculation details
Interpretation Report
Step-by-step analysis & plain-language conclusions
1. Hypotheses
2. Assumptions check
3. Computed statistical values
4. Statistical decision
5. Effect size & practical significance
6. Post-hoc analysis
7. Conclusion & reporting
What is One-Way ANOVA?
Developed by Ronald A. Fisher (1921), Analysis of Variance (ANOVA) tests whether the means of two or more independent groups are significantly different from one another. The "One-Way" designation means there is a single independent variable (factor) with multiple levels (groups). ANOVA partitions the total variance in the dependent variable into two components: variance between groups (explained by group membership) and variance within groups (random error).
F = MS_between / MS_within, where:
  SS_between = Σ nⱼ(x̄ⱼ − x̄_grand)²   [sum of squares between groups]
  SS_within = ΣΣ (xᵢⱼ − x̄ⱼ)²         [sum of squares within groups]
  df_between = k − 1                  [k = number of groups]
  df_within = N − k                   [N = total observations]
  MS_between = SS_between / df_between
  MS_within = SS_within / df_within
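As a minimal sketch, the sums of squares and F-ratio can be computed directly from these definitions and cross-checked against SciPy's built-in `f_oneway` (the three groups of scores here are made-up illustrative values):

```python
import numpy as np
from scipy import stats

# Illustrative data: three independent groups of five observations each
groups = [
    np.array([23.0, 25.0, 21.0, 24.0, 22.0]),
    np.array([30.0, 28.0, 31.0, 27.0, 29.0]),
    np.array([26.0, 24.0, 27.0, 25.0, 26.0]),
]

k = len(groups)                      # number of groups
N = sum(len(g) for g in groups)      # total observations
grand_mean = np.concatenate(groups).mean()

# Partition the total variance, term by term as in the formulas above
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between, df_within = k - 1, N - k
ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within
p = stats.f.sf(F, df_between, df_within)  # right-tail p from the F-distribution

# Cross-check against SciPy's one-way ANOVA
F_ref, p_ref = stats.f_oneway(*groups)
```

The manual computation and `f_oneway` agree to floating-point precision, which is a useful sanity check when implementing the formulas yourself.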
Assumptions
  • Independence of observations — each participant contributes one score to one group only. Scores are not related across groups.
  • Normality — the dependent variable is approximately normally distributed within each group. ANOVA is robust to violations with large samples (n ≥ 30 per group). Test with Shapiro-Wilk (small samples) or inspect histograms/Q-Q plots.
  • Homogeneity of variances (homoscedasticity) — population variances are approximately equal across all groups. Test with Levene's test or Bartlett's test. If violated, use Welch's ANOVA instead.
  • Continuous dependent variable — the outcome must be measured at interval or ratio level. Use Kruskal-Wallis H-test for ordinal data.
  • Minimum sample size — at least 2 observations per group; practically, ≥ 5 per group for stable estimates.
Effect size measures
  • Eta-squared (η²) — η² = SS_between / SS_total. Proportion of total variance explained by group membership. Ranges 0–1. Benchmarks: small = 0.01, medium = 0.06, large = 0.14 (Cohen, 1988). Note: η² overestimates population effect size in small samples.
  • Omega-squared (ω²) — ω² = (SS_between − df_between × MS_within) / (SS_total + MS_within). Less biased estimate of population effect size. Preferred over η² for reporting. Same benchmarks apply.
  • Cohen's f — f = √(η² / (1 − η²)). Benchmarks: small = 0.10, medium = 0.25, large = 0.40 (Cohen, 1988). Used for power analysis (G*Power).
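All three measures follow directly from the ANOVA sums of squares; a small helper shows the arithmetic (function name and example numbers are illustrative):

```python
import math

def effect_sizes(ss_between: float, ss_within: float,
                 df_between: int, df_within: int) -> dict:
    """Compute eta-squared, omega-squared, and Cohen's f from ANOVA sums of squares."""
    ss_total = ss_between + ss_within
    ms_within = ss_within / df_within
    eta2 = ss_between / ss_total
    # Omega-squared corrects the small-sample bias of eta-squared
    omega2 = (ss_between - df_between * ms_within) / (ss_total + ms_within)
    cohens_f = math.sqrt(eta2 / (1 - eta2))
    return {"eta_squared": eta2, "omega_squared": omega2, "cohens_f": cohens_f}

# Example: SS_between = 100, SS_within = 100, k = 3 groups, N = 30
es = effect_sizes(100.0, 100.0, df_between=2, df_within=27)
```

Note how ω² comes out smaller than η² in the example, consistent with its role as the less biased estimate.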
Post-hoc tests: when and why
A significant ANOVA F-test tells you that at least one group mean differs, but not which pairs differ. Post-hoc tests control the family-wise error rate (FWER) while making all possible pairwise comparisons.
  • Tukey's Honestly Significant Difference (HSD) — most widely recommended when group sizes are equal or near-equal. Controls FWER at exactly α. Computes the minimum mean difference required for significance. Reference: Tukey (1949).
  • Bonferroni correction — divides α by the number of comparisons. More conservative than Tukey. Best when comparisons are planned (a priori), not exploratory. Reference: Dunn (1961).
  • Games-Howell — used when variances are unequal (Levene's test is significant). Does not assume homoscedasticity. Reference: Games & Howell (1976).
  • Scheffé test — most conservative; appropriate for complex comparisons (not just pairwise contrasts). Use it when comparisons beyond simple pairwise differences are explored after the fact. Reference: Scheffé (1953).
APA 7th edition reporting format
Template: "A one-way ANOVA was conducted to examine the effect of [IV] on [DV]. The results indicated a [significant/non-significant] difference between groups, F([df_between], [df_within]) = [F value], p = [p value], η² = [eta-squared]. [Post-hoc results if applicable]."
Report F-values to two decimal places. Report exact p-values (e.g., p = .032) unless p < .001, in which case write p < .001. Always report effect size. If ANOVA is significant, add a sentence summarizing post-hoc results.
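A small helper can fill the template mechanically and enforce these formatting rules (the function name and IV/DV placeholders are illustrative):

```python
def apa_report(f_val, df_between, df_within, p, eta2,
               iv="the independent variable", dv="the dependent variable"):
    """Fill the APA 7th reporting template from computed ANOVA values."""
    def no_lead_zero(x, nd):
        # APA drops the leading zero for statistics that cannot exceed 1
        s = f"{x:.{nd}f}"
        return s[1:] if s.startswith("0.") else s

    p_text = "p < .001" if p < 0.001 else f"p = {no_lead_zero(p, 3)}"
    sig = "significant" if p < 0.05 else "non-significant"
    return (f"A one-way ANOVA was conducted to examine the effect of {iv} on {dv}. "
            f"The results indicated a {sig} difference between groups, "
            f"F({df_between}, {df_within}) = {f_val:.2f}, {p_text}, "
            f"η² = {no_lead_zero(eta2, 2)}.")

sentence = apa_report(4.52, 2, 27, 0.0198, 0.25,
                      iv="teaching method", dv="exam score")
```

The helper handles the two fiddly rules automatically: exact p-values with no leading zero, and the p < .001 cutoff.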
Primary references
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE Publications.
Fisher, R. A. (1921). On the "probable error" of a coefficient of correlation deduced from a small sample. Metron, 1, 3–32.
Fisher, R. A. (1925). Statistical methods for research workers. Oliver and Boyd.
Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). McGraw-Hill.
Games, P. A., & Howell, J. F. (1976). Pairwise multiple comparison procedures with unequal n's and/or variances. Journal of Educational Statistics, 1(2), 113–125.
Tukey, J. W. (1949). Comparing individual means in the analysis of variance. Biometrics, 5(2), 99–114.