Introduction
A sample size calculator helps you figure out how many people or items you need to study to get results you can trust. In statistics, you can't always survey or test everyone in a large group, so you pick a smaller piece called a sample. But if your sample is too small, your results might not be accurate. If it's too big, you waste time and money. This tool does the math for you — just enter your confidence level, margin of error, and population size, and it will tell you the right number of people to include in your study. Whether you're working on a school project, a science experiment, or a survey, knowing the correct sample size is one of the most important steps in getting reliable data.
How to Use Our Sample Size Calculator
Enter a few details about your study below, and the calculator will tell you how many people you need to survey to get reliable results.
Population Size: Type in the total number of people in the group you want to study. For example, if you want to learn about a school with 500 students, enter 500. If your population is very large or unknown, you can enter a large number like 100,000 or more.
Confidence Level: Pick how sure you want to be that your results are correct. A 95% confidence level is the most common choice. This means if you ran the same survey 100 times, about 95 of those surveys would capture the true value within their margin of error. Common options are 90%, 95%, and 99%. You can use our Confidence Interval Calculator to explore how different confidence levels affect the range of your estimates.
Margin of Error: Enter the amount of error you are okay with, shown as a percentage. A smaller margin of error means your results will be more exact, but you will need to survey more people. A margin of error of 5% is a common starting point.
Population Proportion: Enter your best guess for the percentage of people who will pick a certain answer. If you are not sure, use 50%, which gives you the largest sample size and the safest estimate. This value should be between 1% and 99%. Our Percentage Calculator can help you convert between fractions and percentages if needed.
Sample Size (Result): After you fill in the fields above, the calculator will show you the minimum number of people you need to include in your survey. This number ensures your results meet the confidence level and margin of error you chose.
What Is Sample Size and Why Does It Matter?
Sample size is the number of people, items, or observations you include in a study or survey. Picking the right sample size is one of the most important steps in any research project. If your sample is too small, your results may be unreliable or miss a real effect. If your sample is too large, you waste time and money collecting more data than you need.
Key Concepts Behind Sample Size Calculations
Confidence Level
The confidence level tells you how sure you want to be that your results reflect the truth. A 95% confidence level means that if you repeated the same study 100 times, about 95 of those studies would capture the true value. Common choices are 90%, 95%, and 99%. A higher confidence level requires a larger sample size. For a deeper look at how confidence levels translate into actual intervals, try our Confidence Interval Calculator.
Margin of Error
The margin of error is the range within which the true answer likely falls. For example, if a poll says 60% of people prefer brand A with a margin of error of ±5%, the true value is probably between 55% and 65%. A smaller margin of error gives you more precise results but requires more people in your sample. You can use our Percent Error Calculator to understand how observed values compare to expected values in your analysis.
Statistical Power
Power is the chance that your study will detect a real difference or effect when one actually exists. A power of 80% means there is an 80% chance you will find a real effect if it is there. The standard recommendation is 80% or 90% power. Lower power means you risk missing real findings, which is called a Type II error.
Significance Level (Alpha)
The significance level, often written as α, is the probability of concluding there is an effect when there really is not one. This mistake is called a Type I error or a false positive. The most common choice is α = 0.05, which means you accept a 5% chance of a false positive. A stricter (smaller) alpha requires a larger sample. Our p Value Calculator can help you interpret the significance of your test results once your study is complete.
Effect Size
Effect size measures how big the difference or relationship is that you expect to find. A large effect is easier to detect and needs fewer subjects. A small effect is harder to spot and needs many more subjects. Common measures include Cohen's d for comparing means (small = 0.2, medium = 0.5, large = 0.8) and Cohen's f for ANOVA (small = 0.10, medium = 0.25, large = 0.40). Computing effect size often involves the Standard Deviation Calculator, since standard deviation is a core component of most effect size formulas.
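To make the Cohen's d definition concrete, here is a minimal sketch of how it can be computed from two samples using Python's standard library. The function name and the example data are illustrative, not part of the calculator; the pooled standard deviation formula shown is the standard one for two independent groups.

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)  # sample SDs (n - 1 denominator)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

# Hypothetical scores: a mean difference of 2 points with SD about 1.58
# gives d of about 1.26, a large effect by Cohen's benchmarks.
print(cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7]))
```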
Common Study Designs
Survey or Proportion Estimation
This is the most common scenario. You want to estimate what percentage of a group has a certain opinion, trait, or behavior. The formula uses your expected proportion, desired margin of error, and confidence level. When your population is small and known (for example, 5,000 employees in a company), a finite population correction reduces the required sample size because you are surveying a larger share of the total group.
One Group vs. a Known Value
This design compares a single group's average or proportion to a known reference number. For example, you might test whether the average test score at your school differs from the national average. This uses a one-sample t-test for means or a one-sample proportion test for binary outcomes. Understanding where your data falls relative to the population average often involves Z-scores, which measure how many standard deviations a value is from the mean.
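For the one-group design, the required sample size for detecting a mean difference can be sketched with the normal approximation (the exact t-based answer is slightly larger). The function name and the score example below are illustrative assumptions, not the calculator's internals.

```python
from math import ceil
from statistics import NormalDist

def one_sample_mean_n(delta, sigma, alpha=0.05, power=0.80):
    """Subjects needed to detect a mean difference `delta` from a known
    reference value, given SD `sigma`, two-sided test, normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a 5-point gap from the national average, SD of 15 points:
print(one_sample_mean_n(delta=5, sigma=15))  # 71
```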
Two Independent Groups
This is used when you compare two separate groups, such as a treatment group and a control group. The two-sample t-test compares means, and the Chi-Square test compares proportions. An allocation ratio lets you assign unequal numbers to each group, such as putting twice as many people in the treatment group.
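A minimal sketch of the per-group sample size for comparing two means, again using the normal approximation, shows how power, alpha, effect size, and the allocation ratio interact. The function name is illustrative; `ratio` here means group 2's size divided by group 1's.

```python
from math import ceil
from statistics import NormalDist

def two_sample_n_per_group(d, alpha=0.05, power=0.80, ratio=1.0):
    """Group sizes for a two-sample comparison of means (normal approximation).

    d:     standardized effect size (Cohen's d)
    ratio: allocation ratio n2 / n1 (1.0 means equal groups)
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n1 = (1 + 1 / ratio) * ((z_a + z_b) / d) ** 2
    return ceil(n1), ceil(ratio * n1)

# Medium effect (d = 0.5), 80% power, alpha = 0.05, equal groups:
print(two_sample_n_per_group(0.5))  # (63, 63)
# Twice as many people in the treatment group:
print(two_sample_n_per_group(0.5, ratio=2.0))
```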
Paired or Matched Design
In a paired design, you measure the same subjects twice — for example, before and after a treatment. Because each person serves as their own control, paired designs are often more efficient and need fewer subjects than two-group designs. The paired t-test handles continuous outcomes, and McNemar's test handles binary outcomes. Analyzing the differences between paired measurements often benefits from tools like the Mean Median Mode Calculator to summarize the distribution of those differences.
The Basic Sample Size Formula for Surveys
The most widely used formula for estimating a proportion is:
n = (Z² × p × (1 − p)) / E²
Where n is the sample size, Z is the Z-score for your confidence level (1.96 for 95%), p is the expected proportion (use 0.50 if unknown, as this gives the largest sample), and E is the margin of error. For a finite population of size N, apply the correction: n_adjusted = n / (1 + (n − 1) / N). If you need to look up or verify a Z-score for a specific confidence level, our Z Score Calculator makes it easy.
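The formula and the finite population correction above can be sketched in a few lines of Python. The function name is illustrative; the z-score comes from the standard library's `statistics.NormalDist`, so no external packages are needed.

```python
from math import ceil
from statistics import NormalDist

def survey_sample_size(confidence=0.95, margin=0.05, p=0.50, population=None):
    """Minimum sample size for estimating a proportion.

    confidence: confidence level, e.g. 0.95
    margin:     margin of error E as a fraction, e.g. 0.05 for +/-5%
    p:          expected proportion (0.50 is the conservative default)
    population: total population N, or None for a very large population
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 for 95%
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite population correction
    return ceil(n)

print(survey_sample_size())                # very large population -> 385
print(survey_sample_size(population=500))  # school of 500 students -> 218
```

Note how the correction shrinks the answer from 385 to 218 when you are surveying a sizable share of a 500-person population.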
Tips for Choosing Your Inputs
- When you don't know the expected proportion, use 50%. This is the most conservative choice and produces the largest (safest) sample size.
- Always plan for dropouts. If you expect 10% of participants to drop out, increase your calculated sample size by about 10% to compensate. A Percent Change Calculator can help you quickly determine the adjusted number after accounting for expected attrition.
- Use a one-sided test only when you are certain the effect can only go in one direction. In most cases, a two-sided test is safer and more widely accepted.
- Report your assumptions. Always state the power, alpha, effect size, and any other values you used so others can evaluate and reproduce your calculation.
- Check for correlation in your data. If your study design involves examining relationships between variables, the Correlation Coefficient Calculator can help you estimate effect sizes before running your power analysis.
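The dropout adjustment in the tips above is simple arithmetic. One common convention, slightly more conservative than adding a flat 10%, is to divide by the expected retention rate so that the target number remains after attrition. The function name below is illustrative.

```python
from math import ceil

def inflate_for_dropout(n, dropout_rate):
    """Enroll enough people so that `n` remain after expected attrition."""
    return ceil(n / (1 - dropout_rate))

# A calculated sample of 385 with 10% expected dropout:
print(inflate_for_dropout(385, 0.10))  # 428
```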