What Is Chapter 3 and Why Does It Matter?
Chapter 3 is the methodology chapter of your thesis or dissertation. Its purpose is straightforward: to explain, in precise detail, how you conducted your study. It answers the questions your committee and future readers will ask before they trust your findings — What was your research design? Who were your participants? How did you collect data? How did you analyse it?
Among all the chapters students submit for committee review, Chapter 3 generates the most feedback and the most revisions. The reason is not that it is the longest chapter — it rarely is. The reason is that every methodological choice must be justified, not merely described. Saying "I used a survey" is a description. Explaining why a survey was the most appropriate instrument for your research questions, given your paradigm and population, is justification. The distinction matters enormously at the proposal defense stage.
Creswell and Creswell (2018) describe the methodology chapter as the point where the researcher's philosophical assumptions — their worldview — become visible in the practical decisions they make. A researcher who holds a positivist worldview will design a study very differently from one who holds an interpretivist worldview, and Chapter 3 is where those differences become concrete.
Important Distinction
Chapter 3 describes what you will do (in a proposal) or what you did (in a completed thesis). Proposals use future tense throughout. Completed dissertations use past tense. Mixing tenses within Chapter 3 is one of the most commonly flagged errors during panel review.
The Standard Structure of Chapter 3
While universities differ in their specific formatting requirements, the core components of Chapter 3 are consistent across disciplines and citation styles. The structure below reflects the framework recommended by Creswell and Creswell (2018) and is consistent with the research onion model described by Saunders, Lewis, and Thornhill (2019).
Some institutions add a section on ethical considerations as a standalone component rather than embedding it within the data collection procedure. Check your university's graduate manual to confirm local expectations before finalising your structure.
Section 1: Research Design
The research design is the overarching plan that guides your entire study. It determines how you will collect and analyse data, and it must be logically connected to your research questions. Creswell and Creswell (2018) identify three broad research designs: quantitative, qualitative, and mixed methods.
Quantitative Research Design
Quantitative research is appropriate when your research questions ask about relationships, differences, or frequencies among measurable variables. It operates from a post-positivist philosophical paradigm — the assumption that an objective reality exists and can be measured, even if imperfectly. Common quantitative designs include descriptive surveys, correlational studies, causal-comparative studies, and experimental or quasi-experimental designs.
Qualitative Research Design
Qualitative research explores meaning, experience, and social phenomena that cannot be reduced to numbers. It is suited to research questions that ask "how" or "why" about complex human experiences. Common qualitative designs include phenomenology, grounded theory, case study, ethnography, and narrative inquiry.
Mixed Methods Design
Mixed methods research integrates both quantitative and qualitative approaches within a single study. Saunders, Lewis, and Thornhill (2019) note that mixed methods are particularly powerful when neither approach alone is sufficient to answer the research questions fully. The integration can be sequential (one method informing the other) or concurrent (both methods used simultaneously).
Writing Tip
Do not simply name your research design. State it, define it briefly, and then explain — in one to two sentences — why it is the most appropriate choice for your specific research questions. Your justification should reference the nature of the questions, not personal preference.
Figure 1. The Research Onion model illustrates how each methodological decision — from philosophy to data collection — must be internally consistent. Source: Saunders, Lewis & Thornhill (2019).
Section 2: Population and Sampling
The population is the full group to which your study's findings are intended to apply. The sample is the subset of that population from which you will actually collect data. Chapter 3 must clearly define both, explain how the sample was drawn, and justify the sample size.
Defining the Target Population
Be specific. "Students in the Philippines" is not a target population — it is a universe. "Third-year Bachelor of Secondary Education students enrolled at a state university in Region VII during the academic year 2024–2025" is a target population. The more precisely you define it, the more defensible your sampling decisions become.
Sampling Techniques
Sampling techniques fall into two broad categories: probability sampling (where every member of the population has a known, non-zero chance of selection) and non-probability sampling (where selection is based on convenience, judgment, or other non-random criteria).
| Technique | Type | When to Use |
|---|---|---|
| Simple Random Sampling | Probability | When you have a complete list of the population and need unbiased selection. |
| Stratified Random Sampling | Probability | When subgroups (strata) within the population must be proportionally represented. |
| Systematic Sampling | Probability | When you select every nth member from a list — practical for large populations. |
| Purposive Sampling | Non-Probability | When specific characteristics are required in participants, common in qualitative research. |
| Convenience Sampling | Non-Probability | When accessibility drives selection. Acknowledge the limitation of generalisability. |
Sample Size Determination
For quantitative studies involving finite populations, the most commonly used formula in undergraduate and graduate research is Slovin's formula:
Slovin's Formula
n = N ÷ (1 + Ne²), where N = total population size and e = margin of error (typically 0.05 for 95% confidence).
Example: If N = 650 students and e = 0.05, then n = 650 ÷ (1 + 650 × 0.0025) = 650 ÷ 2.625 ≈ 247.6, which rounds to 248 respondents.
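If you prefer to verify the computation programmatically, the formula translates directly into a few lines of Python. This is a minimal sketch; it rounds to the nearest whole number, which matches the worked examples in this article, though some texts always round up.

```python
def slovin(N: int, e: float = 0.05) -> int:
    """Slovin's formula: required sample size for a finite population of size N
    at margin of error e. Rounds to the nearest whole number."""
    return round(N / (1 + N * e ** 2))

print(slovin(650))  # 248
```

Changing `e` to 0.01 shows how sharply the required sample grows as the margin of error tightens.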
Use the Research Innovation Hub Sample Size Calculator to compute your required sample size automatically with step-by-step output.
Section 3: Research Instrument
The research instrument is the tool through which you collect your data. In quantitative studies, this is most often a structured questionnaire or a standardised test. In qualitative studies, it is typically an interview guide or observation protocol. The instrument section of Chapter 3 must describe the tool, explain how it was constructed or sourced, and provide evidence that it produces valid and reliable data.
Validity
Validity refers to the degree to which an instrument measures what it claims to measure. For student-designed questionnaires, content validity is established through expert review — typically a panel of three to five subject-matter experts who evaluate each item for relevance, clarity, and alignment with the research objectives. Their feedback is used to revise items before administration. The Content Validity Index (CVI) is often used to quantify panel agreement.
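To make the CVI computation concrete, here is a minimal Python sketch of the item-level CVI (I-CVI), assuming the common convention of a four-point relevance scale on which ratings of 3 or 4 count as "relevant"; the panel ratings shown are hypothetical.

```python
def item_cvi(ratings: list[int]) -> float:
    """Item-level Content Validity Index: the proportion of experts who rate
    the item as relevant (3 or 4 on a 4-point relevance scale)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical ratings from a five-member expert panel for one questionnaire item.
print(item_cvi([4, 3, 4, 2, 3]))  # 0.8
```

An item scoring below your retention threshold (0.80 in the worked example later in this article) would be revised or removed before the pilot study.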
Reliability
Reliability refers to the consistency of the instrument across administrations. For Likert-scale questionnaires, Cronbach's Alpha is the standard reliability measure. An alpha coefficient of 0.70 or above is generally considered acceptable for research purposes; coefficients above 0.80 indicate strong internal consistency. Reliability testing is conducted during a pilot study administered to a small subset of the target population (typically 20 to 30 participants) who are excluded from the main study.
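Statistical packages report Cronbach's Alpha automatically, but the computation itself is straightforward: it compares the summed item variances with the variance of respondents' total scores. A minimal Python sketch, using population variances throughout; the scores are hypothetical.

```python
import statistics

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's Alpha for a set of items.

    items: one list of scores per item, all the same length
    (one score per pilot respondent)."""
    k = len(items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(statistics.pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - sum_item_var / statistics.pvariance(totals))

# Three hypothetical items answered identically by four pilot respondents
# yield perfect internal consistency (alpha = 1.0).
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

In practice you would run this on your pilot data and check the coefficient against the 0.70 acceptability threshold described above.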
Common Error
Many students adopt standardised instruments without establishing that those instruments are appropriate for their specific population. If you use an instrument developed and validated in a Western university context and your participants are students in Southeast Asia, you must address cultural transferability and re-establish reliability with your own pilot data.
Section 4: Data Collection Procedure
The data collection procedure is a sequential, step-by-step account of exactly how data were gathered. Its purpose is twofold: to demonstrate methodological rigour, and to provide sufficient detail for replication. A reader should be able to follow your procedure and reproduce your study.
A well-written data collection procedure covers the following, in order:
- Ethics clearance. State that institutional ethics approval was obtained, or describe the equivalent process at your institution (Dean's approval, departmental review, or IRB clearance).
- Participant recruitment. Explain how participants were identified, approached, and invited. Include a description of how informed consent was obtained and documented.
- Instrument administration. Describe how questionnaires or instruments were distributed — in person, via Google Forms, through class representatives — and the timeline for data collection.
- Data retrieval. Explain how completed instruments were collected and how incomplete or unusable responses were handled.
- Confidentiality measures. Describe how participant anonymity was protected, including how data files were stored and who had access to them.
Saunders, Lewis, and Thornhill (2019) emphasise that research ethics is not a procedural formality. Voluntary participation, informed consent, and the right to withdraw must be genuinely operationalised — not listed as assurances without corresponding procedural safeguards.
Section 5: Data Analysis
The data analysis section explains the statistical or interpretive techniques used to process the collected data and produce findings. In quantitative research, this means specifying which inferential or descriptive statistics were applied and why. Every statistical technique must be linked back to a specific research question or hypothesis.
Descriptive Statistics
Descriptive statistics summarise and describe the characteristics of the sample. Measures of central tendency (mean, median, mode) and measures of dispersion (standard deviation, range, variance) are typically presented for each major variable before inferential analysis is conducted. These give readers a clear picture of the data before relationships or differences are examined.
Inferential Statistics
Inferential statistics allow you to draw conclusions about a population based on sample data. The choice of inferential test depends on the nature of your research questions, the number of groups being compared, the measurement scale of your variables, and whether the assumptions of normality and homogeneity of variance are met.
| Research Question Type | Recommended Test | Variable Type |
|---|---|---|
| Significant difference between two groups | Independent Samples T-Test | Continuous dependent variable |
| Significant difference across three or more groups | One-Way ANOVA | Continuous dependent variable |
| Relationship between two continuous variables | Pearson Correlation | Two continuous variables |
| Relationship between ranked or ordinal variables | Spearman Rank Correlation | Ordinal variables |
| Predictive relationship (IV predicts DV) | Simple or Multiple Regression | Continuous variables |
| Association between categorical variables | Chi-Square Test of Independence | Nominal or categorical variables |
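As a concrete illustration of the first row of the table, the pooled-variance independent samples t statistic can be computed by hand. This is a minimal sketch assuming equal group variances; the scores are hypothetical, and in practice a statistical package also reports the degrees of freedom and p-value.

```python
import math
import statistics

def independent_t(group1: list[float], group2: list[float]) -> float:
    """Pooled-variance independent samples t statistic (assumes equal variances)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    pooled_var = ((n1 - 1) * statistics.variance(group1)
                  + (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# Hypothetical test scores for two groups of students.
print(independent_t([82.0, 85.0, 88.0, 90.0], [74.0, 78.0, 80.0, 83.0]))
```

Identical groups give t = 0; the larger the gap between group means relative to the pooled spread, the larger the statistic.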
Use the Research Innovation Hub Statistical Tools to run Pearson correlation, T-tests, ANOVA, regression, and more — directly in your browser, with step-by-step output formatted for thesis submission.
Full Worked Example: Student Academic Performance
The following example illustrates how the five components of Chapter 3 apply to a specific quantitative study. This example can serve as a structural model; your own Chapter 3 must be written to reflect your actual study.
Study Context
Working title: The Relationship Between Study Habits and Academic Performance Among Third-Year Students of a State University in Region VII, Philippines.
Main research question: Is there a significant relationship between student study habits and their general weighted average (GWA)?
Research Design (Worked Example)
This study employed a quantitative, descriptive-correlational research design. A descriptive approach was used to determine the level of study habits and academic performance among participants. A correlational approach was used to examine the nature and strength of the relationship between these two variables. This design is consistent with the post-positivist paradigm, which holds that the relationship between measurable variables can be objectively determined through systematic data collection and statistical analysis (Creswell & Creswell, 2018).
Population and Sampling (Worked Example)
The target population consisted of 640 third-year students enrolled across five colleges at the university during the first semester of Academic Year 2024–2025. Using Slovin's formula with a 5% margin of error, the required sample size was computed as follows:
Sample Size Computation
n = 640 ÷ (1 + 640 × 0.05²) = 640 ÷ (1 + 1.6) = 640 ÷ 2.6 ≈ 246.2, rounded to 246 students
Proportional stratified random sampling was used to ensure that each college was represented in proportion to its share of the total population. A sampling frame (class list) was obtained from the Registrar's Office, and respondents within each stratum were selected using a random number generator.
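The proportional allocation described above can be sketched in a few lines of Python. The college names and enrolment figures below are hypothetical; note that rounding can occasionally make the allocations drift a respondent or two from n, in which case the largest stratum is usually adjusted.

```python
def proportional_allocation(strata_sizes: dict[str, int], n: int) -> dict[str, int]:
    """Allocate a total sample of n across strata in proportion to stratum size."""
    N = sum(strata_sizes.values())
    return {name: round(n * size / N) for name, size in strata_sizes.items()}

# Hypothetical enrolment per college, totalling 640 students.
colleges = {"Education": 180, "Engineering": 140, "Business": 120,
            "Nursing": 110, "Arts and Sciences": 90}
print(proportional_allocation(colleges, 246))
```

Each college's quota is then filled by random selection from its own class list, as described above.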
Research Instrument (Worked Example)
Data were collected using a researcher-adapted questionnaire composed of two parts. Part I gathered demographic information (sex, college, and academic scholarship status). Part II contained 30 items measuring study habits across five dimensions — time management, note-taking practices, reading strategies, test preparation, and learning environment — rated on a five-point Likert scale ranging from 1 (Never) to 5 (Always). Academic performance was measured using the students' officially recorded General Weighted Average (GWA) for the previous semester, obtained from the Registrar's Office.
Content validity was established through a panel of five faculty members with expertise in educational psychology. Items with a Content Validity Index (CVI) below 0.80 were revised or removed. Reliability was determined through a pilot study involving 30 students from a neighbouring state university. The Cronbach's Alpha coefficient was 0.87, indicating strong internal consistency.
Data Collection Procedure (Worked Example)
Prior to data collection, ethical clearance was secured from the university's Research Ethics Committee. Written permission was obtained from the Dean of each college. Participants received an informed consent form explaining the study's purpose, the voluntary nature of participation, and their right to withdraw without consequence. Questionnaires were administered personally by the researcher during vacant class periods, with a collection period of three weeks (March 3–21, 2025). Completed questionnaires were checked for completeness before acceptance; four incomplete forms were excluded, yielding a final sample of 242 usable responses.
Data Analysis (Worked Example)
Data were encoded in Microsoft Excel and analysed using IBM SPSS Statistics Version 29. Descriptive statistics — mean and standard deviation — were computed for study habit scores and GWA. The normality of score distributions was assessed using the Shapiro-Wilk test. Because both variables were continuous and normally distributed, Pearson Product-Moment Correlation was used to test the relationship between study habits and academic performance. The level of significance was set at α = 0.05. Correlation strength was interpreted using the classification by Evans (1996): coefficients of 0.00–0.19 (very weak), 0.20–0.39 (weak), 0.40–0.59 (moderate), 0.60–0.79 (strong), and 0.80–1.00 (very strong).
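The correlation and its interpretation can be reproduced outside SPSS as well. The sketch below computes Pearson's r in pure Python and applies the Evans (1996) classification used in this study; the data points are hypothetical, and significance testing is left to a statistical package.

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def evans_strength(r: float) -> str:
    """Interpret |r| using the Evans (1996) classification."""
    r = abs(r)
    for cutoff, label in [(0.20, "very weak"), (0.40, "weak"),
                          (0.60, "moderate"), (0.80, "strong")]:
        if r < cutoff:
            return label
    return "very strong"

# Hypothetical study-habit means and GWAs for six students.
# In the Philippine GWA scale a LOWER value means better performance,
# so better habits here show up as a negative correlation.
habits = [3.1, 3.8, 2.5, 4.2, 3.0, 4.5]
gwa = [2.0, 1.8, 2.4, 1.5, 2.1, 1.4]
r = pearson_r(habits, gwa)
print(round(r, 2), evans_strength(r))
```

This also illustrates why Chapter 5 must interpret the sign of r in light of the measurement scale, not just its magnitude.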
Use the Research Innovation Hub Pearson Correlation calculator to analyse your data and generate APA-formatted output for Chapter 4.
Common Mistakes to Avoid in Chapter 3
1. Describing Without Justifying
Every methodological decision must be accompanied by a rationale. "A survey was used to collect data" is insufficient. "A self-administered survey was selected because it allows efficient data collection from a large, geographically concentrated sample and produces quantifiable data appropriate for statistical analysis" is a justification.
2. Mismatching Design to Research Questions
If your research question asks "What is the lived experience of first-generation college students during the thesis-writing process?" — a Likert-scale survey is not the appropriate instrument. Your design must logically follow from your questions. Creswell and Creswell (2018) emphasise that the alignment between research questions, design, and analysis is one of the primary criteria by which committee members evaluate a proposal's credibility.
3. Omitting Validity and Reliability Evidence
Claiming that your questionnaire is valid and reliable without providing the data to support that claim — CVI scores, Cronbach's Alpha coefficient, pilot study details — is a gap that will be flagged at every stage of committee review.
4. Treating Ethics as a Formality
A single sentence stating "ethical considerations were observed" is not sufficient. Name the specific measures: informed consent, voluntary participation, anonymity of responses, data security, and the right to withdraw. Each must correspond to an actual procedural step described elsewhere in the section.
5. Writing in the Wrong Tense
Proposals are written in future tense ("participants will be selected…"). Completed theses are written in past tense ("participants were selected…"). Switching between tenses within Chapter 3 is a common proofreading failure that signals lack of care and reduces academic credibility.
Before You Submit Your Proposal
Read Chapter 3 against your research questions one final time. Every section — design, sampling, instrument, data collection, analysis — should be traceable back to at least one of your stated research questions. If a methodological decision cannot be connected to a research question, it may not belong in your study.
References
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications.
Evans, J. D. (1996). Straightforward statistics for the behavioral sciences. Brooks/Cole.
Saunders, M., Lewis, P., & Thornhill, A. (2019). Research methods for business students (8th ed.). Pearson Education.
Continue Reading
Now that you have a clear understanding of how to write Chapter 3, the following resources will help you move forward: