MODULE 03 · INTERMEDIATE

Research Design &
Methodology

The chapter most research students dread — completely demystified. A comprehensive, peer-reviewed guide covering every decision you need to make from research design to data collection to establishing that your study is trustworthy.

10–12 hours of reading
5 major topics
4 interactive quizzes
1 decision tree
Undergraduate to PhD
01
Foundation

What is Research Design?

The blueprint that connects your research questions to your methods, your data, and your conclusions.

Research design is the overall strategy you select to integrate the different components of your study in a coherent and logical way. According to John Creswell and J. David Creswell (2018), research design is the intersection of philosophical worldviews, strategies of inquiry, and specific research methods. It is not simply a "method" — it is the entire plan that guides every subsequent decision you make. Creswell & Creswell, 2018, p. 4

The design you choose must follow logically from your research problem. As Earl Babbie (2020) explains in The Practice of Social Research, a research design "determines which observations to make, how to make them, and how to relate them to one another." Students who skip the design phase often find themselves midway through data collection realizing their method cannot actually answer their research question.

"A research design is a plan for collecting and analyzing evidence that will make it possible for the investigator to answer whatever questions he or she has posed."

— Charles Ragin, Redesigning Social Inquiry (2008, p. 7)

The Three Levels of Research Design

Research design operates at three interconnected levels that you must address in your methodology chapter:

LEVEL 01 · PHILOSOPHY

Philosophical Worldview

Your underlying assumptions about the nature of reality (ontology) and knowledge (epistemology). Are you a positivist, interpretivist, or pragmatist? This shapes everything downstream.

LEVEL 02 · STRATEGY

Strategy of Inquiry

The broad approach — are you doing a case study, ethnography, grounded theory, survey, experiment, or mixed methods study? Each has established procedures and traditions.

LEVEL 03 · METHODS

Research Methods

The specific data collection tools (interviews, surveys, observations), analytical procedures (thematic analysis, ANOVA, regression), and practices you will use in your study.

FOUNDATIONAL SOURCE

Creswell, J.W. & Creswell, J.D. (2018) identify four major philosophical worldviews in research: Postpositivism (scientific method, numerical data), Constructivism (multiple realities, meaning-making), Transformative (advocacy, marginalized groups), and Pragmatism (real-world consequences, mixed methods). Your worldview should align with your research questions.

Creswell, J.W. & Creswell, J.D. (2018). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (5th ed.). SAGE Publications.

The Research Design Decision: Where Students Go Wrong

The most common mistake in dissertation writing is selecting a research design based on what the student is "comfortable with" rather than what the research question demands. Your design must serve the question — not the other way around. Norman Denzin and Yvonna Lincoln (2018) stress that the relationship between questions and methods is reciprocal: your questions narrow the range of appropriate designs, but your chosen design also clarifies and sharpens your questions. Denzin & Lincoln, 2018

COMMON MISTAKE

Do not write: "I used a qualitative approach because I wanted to explore the phenomenon deeply." This justification is circular and weak.

Write instead: "A qualitative approach was selected because the research question seeks to understand the lived experiences and meaning-making processes of participants — a form of inquiry that cannot be adequately addressed through numerical measurement (Creswell, 2014; Denzin & Lincoln, 2018)."

02
Research Approach

Qualitative Research

Understanding the "how" and "why" through words, meanings, and lived experiences.

Qualitative research is defined by Creswell (2014) as "an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem." It is not simply research that lacks numbers — it is a methodologically distinct tradition with its own logic, quality criteria, and ways of establishing trustworthiness. Creswell, 2014, p. 4

Alan Bryman (2016), in Social Research Methods (5th ed.), describes qualitative research as prioritizing words over quantification, taking an inductive orientation to theory, and holding a constructionist view of reality. This does not make it "less scientific" — it makes it suited to different kinds of questions.

Major Qualitative Research Designs

QUALITATIVE DESIGN

Phenomenology

Describes the "lived experience" of individuals around a phenomenon. Asks: What is the essence of this experience? Associated with Husserl and later Heidegger. Data: in-depth interviews. Analysis: reduction to essential structures.

Moustakas, 1994; van Manen, 1990
QUALITATIVE DESIGN

Grounded Theory

Develops theory grounded in participant data rather than testing existing theory. Systematic procedures by Glaser & Strauss (1967). Asks: What theory emerges from this data? Data: interviews, observations. Analysis: constant comparison, theoretical saturation.

Strauss & Corbin, 1998; Charmaz, 2014
QUALITATIVE DESIGN

Ethnography

Studies the shared culture of a group over an extended period in their natural setting. Asks: How do people in this context make sense of their world? Data: prolonged observation, field notes, interviews, documents. Associated with cultural anthropology.

Fetterman, 2010; Creswell, 2014
QUALITATIVE DESIGN

Case Study

In-depth investigation of a bounded system (a case) — a person, program, institution, or event. Asks: What happened in this specific case? Data: interviews, documents, observations, artifacts. Particularly common in education and organizational research.

Yin, 2018; Stake, 1995

Qualitative Data Analysis Methods

Analysis Method | How It Works | Best Used When | Design Match
Thematic Analysis | Identify, analyze, and report patterns (themes) within data across the dataset | Seeking broad patterns; flexible; most widely applicable | Any qualitative design
Interpretative Phenomenological Analysis (IPA) | Understand how individuals make sense of major lived experiences | Small samples; exploring subjective experience in depth | Phenomenology
Constant Comparison | Compare each new data piece with previous codes, categories, and memos | Building theory from data iteratively | Grounded Theory
Narrative Analysis | Examine how people construct stories to make sense of experience | Biographical research; identity studies; life histories | Narrative Inquiry
Content Analysis | Systematic categorization of text, images, or media into themes | Document analysis; media studies; policy analysis | Document Analysis
WORKED EXAMPLE

Research Question: "What are the lived experiences of first-generation college students navigating the thesis-writing process?"

Appropriate Design: Phenomenological study using semi-structured in-depth interviews with 8–12 purposively selected participants. Data analyzed using Moustakas' (1994) modified Van Kaam method of analysis. Trustworthiness established through member-checking and thick description.

Why? The question asks about lived experience (phenomenological language), a small population, and subjective meaning — all indicators that qualitative phenomenology is appropriate.

KNOWLEDGE CHECK · QUIZ 1 OF 4
A researcher wants to understand how survivors of academic burnout describe and make sense of their recovery experience. Which qualitative design is most appropriate?
A Grounded Theory, to develop a theory of burnout recovery
B Phenomenology, to describe the essence of the recovery experience
C Ethnography, to observe how survivors interact with their environment
D Case Study, to investigate one person's burnout experience
03
Research Approach

Quantitative Research

Testing hypotheses, measuring variables, and generalizing findings through numbers and statistical analysis.

Quantitative research is defined by Creswell & Creswell (2018) as "an approach for testing objective theories by examining the relationship among variables. These variables can be measured, typically on instruments, so that numbered data can be analyzed using statistical procedures." Creswell & Creswell, 2018, p. 4

Rooted in the positivist tradition, quantitative research assumes that there is an objective reality that can be measured and that findings can be generalized across populations. Earl Babbie (2020) notes that the great strength of quantitative methods is their ability to measure variables precisely, test hypotheses rigorously, and produce findings that can be replicated and generalized — strengths that qualitative research cannot offer in the same way. Babbie, 2020, p. 25

Major Quantitative Research Designs

QUANTITATIVE DESIGN

Experimental Design

The gold standard for establishing cause-and-effect. Random assignment of participants to experimental and control groups. Pre-test and post-test measurements. Manipulation of an independent variable. Provides the strongest causal evidence of any design.

Campbell & Stanley, 1963; Shadish et al., 2002
QUANTITATIVE DESIGN

Quasi-Experimental Design

Like experimental design but without random assignment. Used when random assignment is impractical or unethical. Common in educational and social research where intact classrooms or groups are studied. Internal validity is weaker than in true experiments because randomization cannot rule out pre-existing group differences.

Shadish, Cook & Campbell, 2002
QUANTITATIVE DESIGN

Survey Research

Collects data from a sample to describe or explain population characteristics. Cross-sectional (one time point) or longitudinal (multiple time points). The most widely used quantitative design in social, education, and business research. Requires probability sampling for generalizability.

Fowler, 2014; Dillman et al., 2014
QUANTITATIVE DESIGN

Correlational Research

Examines the relationship between two or more variables without manipulating them. Establishes association, not causation. Predictive correlational studies use regression to predict outcomes. Cannot determine which variable influences the other.

Fraenkel et al., 2019

Statistical Analysis: Choosing the Right Test

One of the most paralyzing decisions for quantitative researchers is choosing the correct statistical test. Field (2018), in Discovering Statistics Using IBM SPSS Statistics, provides a practical decision framework based on the nature of your variables and your research question. Field, 2018

Research Question Type | Data Type | Appropriate Test | What It Answers
Difference between 2 independent groups | Interval/Ratio | Independent Samples t-test | Is there a significant mean difference?
Difference within the same group (pre/post) | Interval/Ratio | Paired Samples t-test | Did scores change significantly?
Difference among 3+ groups | Interval/Ratio | One-Way ANOVA | Is there a mean difference across groups?
Relationship between 2 variables | Interval/Ratio | Pearson Correlation | Is there a linear relationship? (r value)
Predicting one variable from another | Interval/Ratio | Linear Regression | How much does X predict Y?
Frequency or category differences | Nominal/Categorical | Chi-Square | Are observed frequencies significantly different from expected?
Relationship between ranked variables | Ordinal | Spearman Correlation | Is there a monotonic relationship?
Effect of 2+ independent variables | Interval/Ratio | Multiple Regression / MANOVA | How do multiple predictors explain variance in the outcome(s)?
Comparison of two related/paired samples (non-parametric) | Ordinal or non-normal interval | Wilcoxon Signed-Rank | Do the medians of two related groups differ?
Summary of central tendency and dispersion | Interval/Ratio | Mean & Descriptives | What is the average and spread of the data?
EXPERT TIP

Always state your alpha level (significance threshold) before analyzing data, not after. The standard is α = .05, meaning you accept a 5% probability of a Type I error (false positive). Some fields use .01 (more stringent). Adjusting your alpha after seeing the results is a form of "p-hacking" — a questionable research practice that invalidates your reported significance levels.

Also report effect sizes (Cohen's d, η², r) alongside p-values. Statistical significance does not equal practical significance: a result of p = .001 with a Cohen's d of 0.08 is statistically detectable but practically trivial. Field, 2018; Cohen, 1988
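As a purely illustrative sketch (the scores are invented, and SPSS or any statistics package reports this directly), Cohen's d for two independent groups is the mean difference divided by the pooled standard deviation:

```python
# Cohen's d sketch: standardized mean difference between two groups.
# All scores are invented for illustration.
import math
import statistics

def cohens_d(group_a, group_b):
    """Mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a)
                  + (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / math.sqrt(pooled_var)

treatment = [78, 80, 76, 82, 79, 77, 81]
control = [70, 72, 68, 75, 71, 69, 73]
d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")
```

By Cohen's (1988) conventions, d ≈ 0.2 is small, 0.5 medium, and 0.8 large — report the value and the interpretation, not just the p-value.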

04
Research Approach

Mixed Methods Research

Integrating quantitative and qualitative approaches for a more complete understanding than either alone can provide.

Mixed methods research is defined by Creswell & Plano Clark (2018) as "an approach to inquiry involving collecting both quantitative and qualitative data, integrating the two forms of data, and using distinct designs that may involve philosophical assumptions and theoretical frameworks." The key term is integration — collecting both types of data in the same study but never meaningfully connecting them is not mixed methods; it is simply two separate studies. Creswell & Plano Clark, 2018, p. 5

"The use of mixed methods provides a more complete picture than either quantitative or qualitative approaches alone."

— Johnson, Onwuegbuzie & Turner (2007, p. 123), Journal of Mixed Methods Research

The Three Core Mixed Methods Designs

MIXED DESIGN

Convergent (Triangulation)

Collect quantitative and qualitative data simultaneously, analyze separately, then compare/merge results. Used to confirm, cross-validate, or corroborate findings from each strand. Equal priority to both strands.

QUAN + QUAL → Compare
MIXED DESIGN

Explanatory Sequential

Quantitative data collected first, analyzed, then qualitative data collected to explain unexpected or significant results. The qualitative phase follows from and explains the quantitative phase. Priority: QUAN → qual.

QUAN → qual (explains)
MIXED DESIGN

Exploratory Sequential

Qualitative data collected first to explore a phenomenon, then quantitative data collected to test or generalize the qualitative findings. Often used to develop instruments based on qualitative themes. Priority: QUAL → quan.

QUAL → quan (generalizes)
WHEN TO USE MIXED METHODS

According to Creswell & Plano Clark (2018), use mixed methods when: (1) one data type is insufficient to understand a problem; (2) you need to explain quantitative results with qualitative depth; (3) you need to develop and test an instrument; (4) you need multiple levels of analysis (e.g., system and individual). Mixed methods is not appropriate just to appear more comprehensive — each strand must serve a specific purpose in answering your research questions.

Research Design Decision Tool
INTERACTIVE · ANSWER 3 QUESTIONS TO FIND YOUR DESIGN

05
Research Process

Sampling Techniques

Who you study matters as much as how you study them. Your sampling decision determines whether your findings can generalize beyond your participants.

Sampling is the process of selecting a subset of a population to represent the whole. According to Babbie (2020, p. 185), good sampling ensures that "the actual descriptions or explanations made on the basis of that sample would also hold for the entire group they represent." Poor sampling is one of the most common reasons research findings are challenged during defense.

Sampling strategies are broadly divided into two families: Probability sampling (where every member of the population has a known, non-zero chance of being selected — required for generalization) and Non-probability sampling (where selection is based on purpose, convenience, or judgment — common in qualitative research where representativeness is not the goal).

Click any sampling technique to explore it in depth:

PROBABILITY SAMPLING — for generalization

Simple Random
Probability

Every member of the population has an equal and independent chance of being selected. The purest form of probability sampling.

How: Assign a number to every population member. Use a random number generator or lottery to select your sample.

Method: Lottery draw, random number table, or RNG software
Example: A study of 400 university students selected randomly from a complete student registry of 8,000.

Best for: Homogeneous populations with accessible, complete sampling frames.

Limitation: Requires a complete and current population list. Minority subgroups may be under-represented by chance.

Babbie, 2020; Fraenkel et al., 2019
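The draw described above can be sketched in a few lines of Python (the module does not prescribe a tool; the registry IDs here are invented for illustration):

```python
# Simple random sampling sketch: every member of the frame has an equal,
# independent chance of selection. Registry IDs are invented.
import random

registry = [f"STU-{i:04d}" for i in range(1, 8001)]  # complete frame of 8,000 students
random.seed(42)                          # fixed seed only so the draw is reproducible
sample = random.sample(registry, k=400)  # draw n = 400 without replacement
print(len(sample))
```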

Systematic Random
Probability

Every nth member of the population is selected after a random start. Simpler to implement than simple random sampling.

How: Calculate sampling interval k = N/n. Select a random start between 1 and k, then select every kth member.

k = N ÷ n → then select every kth case from a random start
Example: Population N = 1,000, needed n = 100. k = 10. Random start = 7. Select: 7, 17, 27, 37…

Caution: Avoid if the population list has a periodic pattern that aligns with your interval (periodicity bias).

Babbie, 2020; Trochim, 2006
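The interval logic above (k = N ÷ n, random start, every kth case) can be sketched as follows — the numbered population is invented for illustration:

```python
# Systematic random sampling sketch: compute the interval k = N / n,
# pick a random start in 1..k, then take every kth member.
import random

population = list(range(1, 1001))   # N = 1,000 numbered members
n = 100
k = len(population) // n            # sampling interval k = 1,000 / 100 = 10
random.seed(7)
start = random.randint(1, k)        # random start between 1 and k
sample = population[start - 1::k]   # every kth member from the random start
print(start, sample[:4])
```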

Stratified Random
Probability

Divide the population into subgroups (strata) sharing a characteristic, then randomly sample from each stratum. Ensures representation of all subgroups.

How: Identify strata (e.g., year level, gender, department). Determine proportion of each stratum. Randomly sample proportionally.

n per stratum = (stratum size ÷ total population) × total sample size
Example: University has 60% freshmen, 25% sophomores, 15% juniors. For n=200: select 120 freshmen, 50 sophomores, 30 juniors.

Best for: Heterogeneous populations where subgroup comparison is important. More precise than simple random for the same sample size.

Fraenkel et al., 2019; Babbie, 2020
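The proportional-allocation formula can be made concrete with a short Python sketch using the 60% / 25% / 15% example above; within each stratum, members would then still be drawn randomly:

```python
# Proportional allocation sketch: n per stratum = (stratum size / N) * total n.
# Counts mirror the freshmen/sophomores/juniors example in the text.
strata = {"freshmen": 600, "sophomores": 250, "juniors": 150}
N = sum(strata.values())    # total population = 1,000
total_n = 200

allocation = {name: round(size / N * total_n) for name, size in strata.items()}
print(allocation)  # {'freshmen': 120, 'sophomores': 50, 'juniors': 30}
```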

Cluster Random
Probability

Divide the population into clusters (naturally occurring groups), randomly select clusters, then study all members within selected clusters.

How: Identify clusters (e.g., schools, barangays, classrooms). Randomly select clusters. Survey all individuals in selected clusters.

Two-stage: Select clusters randomly → then sample within clusters
Example: Rather than sampling students across 300 schools, randomly select 20 schools and survey all students in those schools.

Best for: Large geographically dispersed populations where individual-level sampling frames don't exist.

Limitation: Less statistically efficient than simple random — requires larger samples for same precision.

Babbie, 2020; Trochim, 2006
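The school example above amounts to one-stage cluster sampling, which can be sketched like this (the school rosters are invented):

```python
# One-stage cluster sampling sketch: randomly select whole clusters, then
# include every member of the selected clusters. Rosters are invented.
import random

random.seed(1)
schools = {f"school_{i:03d}": [f"s{i}_{j}" for j in range(30)]
           for i in range(300)}                     # 300 schools of 30 students each
chosen = random.sample(list(schools), k=20)         # stage 1: pick 20 of 300 schools
participants = [s for school in chosen for s in schools[school]]  # all their students
print(len(chosen), len(participants))  # 20 600
```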

NON-PROBABILITY SAMPLING — for qualitative depth

Purposive
Non-Probability

Participants are deliberately selected based on specific characteristics relevant to your research question. The most common sampling method in qualitative research.

Rationale: The goal in qualitative research is information richness, not statistical representation. Purposive sampling maximizes the relevance of participants to the research question.

Example: Selecting five experienced professors known for innovative pedagogy to explore expert teaching practices through in-depth interviews.

Justification wording: "Participants were purposively selected based on [criteria] because they represent information-rich cases relevant to the research phenomenon (Patton, 2015)."

Patton, 2015; Lincoln & Guba, 1985

Snowball
Non-Probability

Initial participants recruit further participants from their networks. Used for hard-to-reach or hidden populations.

How: Start with 1–3 initial participants who meet your criteria. Ask them to refer others they know who also qualify. Continue until theoretical saturation.

Example: A study of undocumented immigrants' educational experiences — initial contact recruits others who trust the researcher through social networks.

Limitation: Can produce biased samples clustered around particular social networks. Transparency about chain of recruitment is required in reporting.

Patton, 2015; Atkinson & Flint, 2001

Theoretical
Non-Probability

Used in Grounded Theory. Sampling continues and evolves as theory develops — participants are selected because of their relevance to emerging theoretical categories.

How: Begin with initial participants. Collect and analyze data simultaneously. Identify emerging concepts. Seek participants who can extend or challenge emerging theory. Stop when theoretical saturation is reached (new data no longer adds new categories).

Example: A grounded theory study of student resilience — early interviews with struggling students; later interviews add faculty, counselors, and high-performing students to develop and test emerging categories.

Strauss & Corbin, 1998; Charmaz, 2014

Slovin's Formula: Calculating Minimum Sample Size

For quantitative surveys with a known and finite population, Slovin's Formula provides the minimum required sample size at a specified margin of error. While modern statisticians recommend Cochran's Formula for more precision, Slovin's Formula remains standard in Philippine and Southeast Asian graduate research. Slovin, 1960; Tejada & Punzalan, 2012

n = N ÷ (1 + Ne²)
WORKED CALCULATION

Population N = 1,200 students; Margin of error e = 0.05 (5%)

n = 1,200 ÷ (1 + 1,200 × 0.0025) = 1,200 ÷ (1 + 3) = 1,200 ÷ 4 = 300 respondents

Important: Add 10–15% to your calculated n to account for anticipated non-response or dropout. For n = 300, target distributing 330–345 questionnaires.

Note: For qualitative research, sample size is determined by theoretical saturation or information redundancy, not by formula. Most qualitative studies use 5–30 participants (Creswell, 2014).
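The worked calculation and the non-response buffer can be combined in a short Python sketch of Slovin's formula:

```python
# Slovin's formula sketch: n = N / (1 + N * e^2), rounded up, plus the
# 10-15% non-response buffer recommended in the text.
import math

def slovin(N, e=0.05):
    """Minimum sample size for a finite population at margin of error e."""
    return math.ceil(N / (1 + N * e ** 2))

n = slovin(1200)              # 1,200 / (1 + 1,200 * 0.0025) = 300
target_low = n * 110 // 100   # +10% buffer
target_high = n * 115 // 100  # +15% buffer
print(n, target_low, target_high)  # 300 330 345
```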

06
Research Process

Data Collection Instruments

Your instrument is the bridge between your constructs and your data. A flawed instrument produces flawed findings — regardless of sample size.

A research instrument is the specific tool used to collect data. According to Fraenkel, Wallen & Hyun (2019), the choice of instrument must align with your research design: questionnaires for surveys, interview guides for qualitative inquiry, observation protocols for ethnography, and standardized tests for experimental designs. Fraenkel et al., 2019

Questionnaire Design: What the Literature Says

Dillman, Smyth & Christian (2014), in Internet, Phone, Mail, and Mixed-Mode Surveys, provide the most comprehensive guidance on questionnaire design. Key principles: minimize cognitive burden, use consistent scale formats, avoid double-barreled questions, sequence general-to-specific, and pilot-test with 5–10 representatives of the target group. Dillman et al., 2014

Likert Scale: The Most Misused Instrument in Social Research

The Likert scale, developed by Rensis Likert (1932), measures the intensity of agreement with a statement. It is ordinal data — not interval data — which has profound implications for your statistical analysis. Norman (2010) and Sullivan & Artino (2013) both argue that Likert data is routinely analyzed inappropriately with parametric tests, producing misleading results. Norman, 2010; Sullivan & Artino, 2013

Scale Points | Format | Best Used When | Limitation
4-point | Strongly Agree → Strongly Disagree (no midpoint) | Forcing a direction; avoiding neutral responses | May frustrate respondents with genuinely neutral views
5-point | SA → A → N → D → SD | Standard academic research; most common | Respondents tend to cluster near the middle
6-point | No neutral; two positive/negative gradations | When you need a direction and more discrimination | Less intuitive; requires more cognitive effort
7-point | SA → A → SomA → N → SomD → D → SD | Greater precision; research requiring finer discrimination | Can overwhelm respondents; higher non-response rates
CRITICAL: LIKERT SCALE ANALYSIS

For individual Likert items: Use non-parametric statistics (median, mode, Mann-Whitney U, Kruskal-Wallis). Individual Likert items are ordinal.

For composite Likert scales (summed scores of multiple items): Many researchers treat these as interval and use parametric tests (t-test, ANOVA, Pearson r). This is acceptable when scales have many items and demonstrate normal distribution, per the "sufficiently robust" argument (Norman, 2010).

Always state in your Chapter 3 which position you are taking and cite the supporting literature. Your panelist may ask. Sullivan & Artino, 2013; Norman, 2010
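To make the two positions in the box above concrete, here is a small Python sketch with invented 5-point responses (the module's own analyses assume SPSS; this is purely illustrative):

```python
# Likert analysis sketch: median/mode for individual (ordinal) items; only
# the summed composite is treated as approximately interval. Data invented.
import statistics

# 5-point responses from 8 participants to three items of one scale
items = {
    "item1": [4, 5, 3, 4, 4, 5, 2, 4],
    "item2": [3, 4, 3, 5, 4, 4, 3, 4],
    "item3": [5, 5, 4, 4, 3, 5, 4, 4],
}

for name, scores in items.items():
    # ordinal level: central tendency via median and mode, not the mean
    print(name, "median:", statistics.median(scores), "mode:", statistics.mode(scores))

# composite scale: sum the items per respondent before any parametric test
composites = [sum(resp) for resp in zip(*items.values())]
print("composite mean:", statistics.mean(composites))
```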

Interview Guide: Qualitative Data Collection

Kvale & Brinkmann (2015) in InterViews: Learning the Craft of Qualitative Research Interviewing describe the research interview as "a professional conversation that has a structure and a purpose." A semi-structured interview guide provides flexibility while ensuring all key topics are covered. Kvale & Brinkmann, 2015, p. 2

INTERVIEW GUIDE STRUCTURE

Opening (Rapport-building): "Can you tell me a little about your background and your experience with [topic]?" — Sets a comfortable tone, not analyzed as data.

Core questions (6–10): Open-ended, beginning with "how," "what," "describe," or "tell me about." Avoid "why" early — it can feel interrogative. Example: "What has your experience of the thesis supervision process been like?"

Probe questions (prepared): "Can you tell me more about that?" / "What did you mean by [term]?" / "Can you give me an example?" — Written in advance but used flexibly.

Closing: "Is there anything about this topic that we haven't covered that you feel is important?" — Captures unexpected insights and signals respect for participant knowledge.

Kvale & Brinkmann, 2015; Creswell, 2014

Observation Protocol & Document Analysis

Observation is appropriate when the researcher wants to understand behavior in its natural context. According to Angrosino (2007), observations should be recorded systematically in field notes that capture both descriptive notes (what happened) and reflective notes (what it means or how the researcher felt). Observation can be participant (researcher joins the group) or non-participant (researcher remains an outside observer). Angrosino, 2007

Document analysis, described by Bowen (2009) as a "systematic procedure for reviewing or evaluating documents," is used to supplement other data or as a primary data source. Documents may be personal (diaries, letters), official (policy documents, minutes), or public (newspapers, reports). Advantages include non-reactivity and historical depth; limitations include incompleteness and unknown provenance. Bowen, 2009

07
Research Quality

Validity & Reliability

The twin pillars of research quality — without them, your findings cannot be trusted, regardless of your sample size or analytical sophistication.

Validity refers to whether your instrument measures what it is supposed to measure. Reliability refers to whether the measurement produces consistent results over time and across raters. Creswell (2014) succinctly defines the distinction: "Validity is accuracy; reliability is consistency." Creswell, 2014, p. 201

Critically, an instrument can be reliable without being valid — a bathroom scale consistently reading 5kg too heavy is highly reliable but systematically invalid. The reverse, however, is not possible: an unreliable instrument cannot be valid, because scores that fluctuate inconsistently cannot accurately capture the intended construct. You need both.

Validity (Quantitative)
Accuracy — are you measuring what you intend?
Content Validity
Does the instrument cover all aspects of the construct? Established through expert panel review (typically 3–5 subject matter experts). Measured by Content Validity Index (CVI ≥ 0.80).
Construct Validity
Does the instrument measure the theoretical construct it claims to? Tested through Factor Analysis (Exploratory or Confirmatory) or by correlating with established measures of the same construct.
Criterion Validity
Concurrent: scores correlate with another accepted measure at the same time. Predictive: scores predict future performance. Essential for instruments intended to diagnose or predict outcomes.
Face Validity
Do items appear to measure what they should to a non-expert observer? The weakest form — necessary but not sufficient. Often established through pilot testing with target respondents.
Trustworthiness (Qualitative)
Lincoln & Guba's (1985) parallel criteria
Credibility
Parallel to internal validity. Strategies: Member-checking (participants verify findings), prolonged engagement, triangulation (multiple data sources or methods), peer debriefing (external reviewer challenges interpretations).
Transferability
Parallel to external validity. Not the researcher's job to claim generalizability — provide thick description detailed enough for readers to judge applicability to their own contexts.
Dependability
Parallel to reliability. Use an audit trail — document all decisions, changes, and rationale throughout the research process so it can be reviewed by an external auditor.
Confirmability
Parallel to objectivity. Findings are shaped by participants, not researcher bias. Established through reflexivity (examining own biases), negative case analysis (seeking disconfirming evidence).

Reliability in Quantitative Research

Reliability Type | How It's Tested | Acceptable Threshold | Key Reference
Internal Consistency | Cronbach's Alpha (α): measures correlation among all items in a scale | α ≥ 0.70 (acceptable); ≥ 0.80 (good); ≥ 0.90 (excellent) | Cronbach, 1951; Nunnally, 1978
Test-Retest Reliability | Administer instrument to same group twice (2–4 week interval); correlate scores | r ≥ 0.70 (acceptable); ≥ 0.80 (good) | Fraenkel et al., 2019
Inter-Rater Reliability | Two or more independent raters score the same responses; compute Cohen's Kappa or ICC | Kappa ≥ 0.61 (substantial); ≥ 0.81 (almost perfect) | Cohen, 1960; Landis & Koch, 1977
Parallel Forms Reliability | Two equivalent versions of the instrument administered to the same group; correlate | r ≥ 0.80 (high equivalence) | Anastasi, 1988
DISSERTATION TIP — VALIDATOR PROCESS

Step 1: Construct your instrument items mapped to your research objectives and conceptual framework.

Step 2: Submit to 3–5 content experts (subject matter experts, not just any PhD) with a validation form asking them to rate each item's relevance, clarity, and accuracy on a 4-point scale.

Step 3: Calculate the Content Validity Index (CVI) — for each item, the proportion of experts rating it 3 or 4 on relevance. Items with a CVI below 0.80 must be revised or removed.

Step 4: Pilot test with 20–30 respondents from the same population but not in your main study. Compute Cronbach's Alpha. Report this in your instrument development section.

Step 5: Revise any item whose removal increases overall alpha (SPSS "Alpha if item deleted" output).
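Steps 4–5 can be sketched in plain Python — the pilot responses below are invented, and in practice you would read SPSS's reliability output rather than hand-code this:

```python
# Cronbach's alpha and "alpha if item deleted" sketch. Pilot data invented.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list."""
    k = len(items)
    sum_item_var = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

pilot = [  # 4 items x 10 pilot respondents
    [4, 5, 3, 4, 2, 5, 4, 3, 4, 5],
    [4, 4, 3, 5, 2, 5, 4, 3, 4, 4],
    [5, 5, 2, 4, 3, 4, 4, 2, 5, 5],
    [3, 5, 3, 4, 2, 5, 5, 3, 4, 4],
]

alpha = cronbach_alpha(pilot)
print(f"alpha = {alpha:.2f}")

# Step 5: flag any item whose removal would raise overall alpha
for i in range(len(pilot)):
    reduced = pilot[:i] + pilot[i + 1:]
    print(f"alpha if item {i + 1} deleted: {cronbach_alpha(reduced):.2f}")
```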

08
Application

Writing Your Chapter 3

Every decision you've learned in this module must be written up clearly, justified with citations, and presented in a logical sequence.

Chapter 3 is the most technically demanding chapter of your dissertation. Every methodological decision must be: (1) named — what did you use; (2) described — how does it work; (3) justified — why is it the best choice for your specific research problem; and (4) cited — attributed to the methodologists who established the approach. A Chapter 3 without citations is a weak Chapter 3. Creswell, 2014; Fraenkel et al., 2019

Standard Chapter 3 Structure

  • 3.1 Research Design — State and justify your overall research design (qualitative, quantitative, mixed). Name the specific design (e.g., descriptive-correlational, phenomenological). Cite foundational authors (Creswell, Bryman, etc.).
  • 3.2 Research Locale — Describe where the study was conducted and why this location was chosen. Provide brief institutional context relevant to the research problem.
  • 3.3 Population, Sample, and Sampling Technique — State total population, justify your sampling technique, show your sample size calculation (or saturation rationale), describe inclusion/exclusion criteria.
  • 3.4 Research Instrument — Describe the instrument (adapted or researcher-made), present its structure (number of items, scale used, subscales), explain the validation process, and report Cronbach's Alpha.
  • 3.5 Data Collection Procedure — Step-by-step account of exactly how data were collected. Include timeline, permissions sought, how respondents were approached, and how data were recorded/stored.
  • 3.6 Statistical Treatment of Data — List every statistical test used and explain what research question it answers. State your alpha level. Cite statistical references (Field, 2018; Fraenkel et al., 2019).
  • 3.7 Ethical Considerations — Describe how you obtained informed consent, maintained confidentiality, protected participant data, and addressed any potential harm. Reference your institutional ethics approval if applicable.
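For the sample size calculation called for in section 3.3, this module uses Slovin's formula, n = N / (1 + Ne²), where N is the population size and e the margin of error. A minimal sketch (the population size and margins below are illustrative):

```python
import math

def slovin(population, margin_of_error=0.05):
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole person."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# Illustrative example: a population of 1,200 students.
print(slovin(1200))        # 5% margin of error -> 300
print(slovin(1200, 0.03))  # tighter 3% margin -> 577 (larger sample)
```

Always round up, never down: rounding down would fall short of the minimum sample, and report both N and e in section 3.3 so the calculation can be checked.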
TEMPLATE PHRASE — RESEARCH DESIGN JUSTIFICATION

"This study employed a [descriptive/correlational/experimental/phenomenological] research design. According to [author, year], this design is appropriate when the researcher seeks to [describe/test/explore/understand] [specific purpose]. Given that this study aims to [your purpose], this design provides the most suitable framework for addressing the stated research questions."

Common Chapter 3 Errors Made by Graduate Students

❌ Error 1: No justification

"A survey questionnaire was used to collect data from the respondents."

✓ Correct:

"A researcher-made questionnaire was used as the primary data collection instrument. This instrument was selected because surveys allow systematic, efficient collection of data from large samples and are well-suited to measuring self-reported attitudes and behaviors (Fowler, 2014; Dillman et al., 2014)."

❌ Error 2: No citation for sampling

"Stratified random sampling was used in this study. The respondents were divided into three groups."

✓ Correct:

"Stratified random sampling was employed in this study. According to Fraenkel et al. (2019), this technique ensures proportional representation of all subgroups within a population, making it particularly appropriate when comparisons across defined strata are central to the research objectives."

❌ Error 3: Reporting α without context

"The Cronbach's Alpha was 0.87."

✓ Correct:

"The instrument demonstrated high internal consistency with a Cronbach's Alpha of 0.87, exceeding the 0.80 threshold recommended by Nunnally (1978) for research instruments used in social science research."

❌ Error 4: Future tense in Chapter 3

"The researcher will distribute questionnaires to the respondents."

✓ Correct (in final thesis):

"The researcher distributed the questionnaires to the respondents during [month/year]." (Use past tense in the completed thesis; present or future in proposals.)

Module Self-Assessment
HOW CONFIDENT ARE YOU? · RATE YOURSELF (Not yet → Fully confident)
  • I can explain the difference between qualitative, quantitative, and mixed methods research and choose appropriately for a given research question.
  • I can select the appropriate sampling technique for a given study and calculate the minimum sample size using Slovin's Formula.
  • I can explain the difference between validity and reliability and describe at least three strategies for establishing each in my study.
  • I can write a well-justified, properly cited methodology chapter that clearly explains and defends every methodological decision in my study.

REFERENCES — APA 7TH EDITION
  • Angrosino, M. (2007). Doing ethnographic and observational research. SAGE Publications.
  • Babbie, E. (2020). The practice of social research (15th ed.). Cengage Learning.
  • Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40. https://doi.org/10.3316/QRJ0902027
  • Charmaz, K. (2014). Constructing grounded theory (2nd ed.). SAGE Publications.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
  • Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). SAGE Publications.
  • Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications.
  • Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publications.
  • Denzin, N. K., & Lincoln, Y. S. (Eds.). (2018). The SAGE handbook of qualitative research (5th ed.). SAGE Publications.
  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.
  • Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE Publications.
  • Fowler, F. J. (2014). Survey research methods (5th ed.). SAGE Publications.
  • Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2019). How to design and evaluate research in education (10th ed.). McGraw-Hill.
  • Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133. https://doi.org/10.1177/1558689806298224
  • Kvale, S., & Brinkmann, S. (2015). InterViews: Learning the craft of qualitative research interviewing (3rd ed.). SAGE Publications.
  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. SAGE Publications.
  • Moustakas, C. (1994). Phenomenological research methods. SAGE Publications.
  • Norman, G. (2010). Likert scales, levels of measurement and the "laws" of statistics. Advances in Health Sciences Education, 15(5), 625–632. https://doi.org/10.1007/s10459-010-9222-y
  • Nunnally, J. C. (1978). Psychometric theory (2nd ed.). McGraw-Hill.
  • Patton, M. Q. (2015). Qualitative research & evaluation methods (4th ed.). SAGE Publications.
  • Ragin, C. C. (2008). Redesigning social inquiry: Fuzzy sets and beyond. University of Chicago Press.
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  • Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). SAGE Publications.
  • Sullivan, G. M., & Artino, A. R. (2013). Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education, 5(4), 541–542. https://doi.org/10.4300/JGME-5-4-18
  • Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). SAGE Publications.