Updated on April 18th, 2026

Sample Size Calculator

Created By Jehan Wadia

Step 1: What do you want to calculate?
Calculate Sample Size

Determine how many subjects you need given power and confidence parameters.

Calculate Power

Find the statistical power achievable with a given sample size.

Calculate Margin of Error

Determine the margin of error for a given sample size and confidence level.

Step 2: Study Design / Context
Survey / Proportion Estimation

For surveys, polls, or single-proportion estimation.

Example: "What percentage of customers prefer brand A?" with optional finite population correction.
One Group vs. Known Population Value

Compare one study cohort to a known/published reference value.

Example: "Is the mean blood pressure in our clinic different from the national average of 120 mmHg?"
Two Independent Groups

Two groups each receiving different treatments or exposures.

Example: "Comparing a new drug vs. placebo for blood pressure reduction."
Paired / Matched Design

Paired observations (e.g., before/after on the same subjects).

Example: "Measuring patients' pain scores before and after a new intervention."
Step 3: Primary Endpoint Type
Dichotomous (Binary)

Two possible outcomes (e.g., success/failure).

The primary endpoint is binomial — only two possible outcomes. Example: mortality (died vs. survived), treatment response (responder vs. non-responder).
Continuous (Means)

Numerical measurement endpoint (e.g., blood pressure, weight).

The primary endpoint is a numerical average. Example: blood pressure reduction (mmHg), hospital length of stay (days).


Introduction

A sample size calculator helps you figure out how many people or items you need to study to get results you can trust. In statistics, you can't always survey or test everyone in a large group, so you pick a smaller piece called a sample. But if your sample is too small, your results might not be accurate. If it's too big, you waste time and money. This tool does the math for you — just enter your confidence level, margin of error, and population size, and it will tell you the right number of people to include in your study. Whether you're working on a school project, a science experiment, or a survey, knowing the correct sample size is one of the most important steps in getting reliable data.

How to Use Our Sample Size Calculator

Enter a few details about your study below, and the calculator will tell you how many people you need to survey to get reliable results.

Population Size: Type in the total number of people in the group you want to study. For example, if you want to learn about a school with 500 students, enter 500. If your population is very large or unknown, you can enter a large number like 100,000 or more.

Confidence Level: Pick how sure you want to be that your results are correct. A 95% confidence level is the most common choice. This means that if you ran the same survey 100 times, about 95 of the resulting estimates would land within your margin of error of the true value. Common options are 90%, 95%, and 99%. You can use our Confidence Interval Calculator to explore how different confidence levels affect the range of your estimates.

Margin of Error: Enter the amount of error you are okay with, shown as a percentage. A smaller margin of error means your results will be more exact, but you will need to survey more people. A margin of error of 5% is a common starting point.

Population Proportion: Enter your best guess for the percentage of people who will pick a certain answer. If you are not sure, use 50%, which gives you the largest sample size and the safest estimate. This value should be between 1% and 99%. Our Percentage Calculator can help you convert between fractions and percentages if needed.

Sample Size (Result): After you fill in the fields above, the calculator will show you the minimum number of people you need to include in your survey. This number ensures your results meet the confidence level and margin of error you chose.

What Is Sample Size and Why Does It Matter?

Sample size is the number of people, items, or observations you include in a study or survey. Picking the right sample size is one of the most important steps in any research project. If your sample is too small, your results may be unreliable or miss a real effect. If your sample is too large, you waste time and money collecting more data than you need.

Key Concepts Behind Sample Size Calculations

Confidence Level

The confidence level tells you how sure you want to be that your results reflect the truth. A 95% confidence level means that if you repeated the same study 100 times, about 95 of those studies would capture the true value. Common choices are 90%, 95%, and 99%. A higher confidence level requires a larger sample size. For a deeper look at how confidence levels translate into actual intervals, try our Confidence Interval Calculator.

Margin of Error

The margin of error is the range within which the true answer likely falls. For example, if a poll says 60% of people prefer brand A with a margin of error of ±5%, the true value is probably between 55% and 65%. A smaller margin of error gives you more precise results but requires more people in your sample. You can use our Percent Error Calculator to understand how observed values compare to expected values in your analysis.

Statistical Power

Power is the chance that your study will detect a real difference or effect when one actually exists. A power of 80% means there is an 80% chance you will find a real effect if it is there. The standard recommendation is 80% or 90% power. Lower power means you risk missing real findings, which is called a Type II error.

Significance Level (Alpha)

The significance level, often written as α, is the probability of concluding there is an effect when there really is not one. This mistake is called a Type I error or a false positive. The most common choice is α = 0.05, which means you accept a 5% chance of a false positive. A stricter (smaller) alpha requires a larger sample. Our p Value Calculator can help you interpret the significance of your test results once your study is complete.

Effect Size

Effect size measures how big the difference or relationship is that you expect to find. A large effect is easier to detect and needs fewer subjects. A small effect is harder to spot and needs many more subjects. Common measures include Cohen's d for comparing means (small = 0.2, medium = 0.5, large = 0.8) and Cohen's f for ANOVA (small = 0.10, medium = 0.25, large = 0.40). Computing effect size often involves the Standard Deviation Calculator, since standard deviation is a core component of most effect size formulas.
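As a concrete illustration, Cohen's d can be computed directly from two samples using the pooled standard deviation. This is a minimal Python sketch (the function name cohens_d is ours, not part of the calculator):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd
```

Comparing the result against the 0.2 / 0.5 / 0.8 benchmarks tells you whether the difference you expect is small, medium, or large before you plan your sample size.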

Common Study Designs

Survey or Proportion Estimation

This is the most common scenario. You want to estimate what percentage of a group has a certain opinion, trait, or behavior. The formula uses your expected proportion, desired margin of error, and confidence level. When your population is small and known (for example, 5,000 employees in a company), a finite population correction reduces the required sample size because you are surveying a larger share of the total group.

One Group vs. a Known Value

This design compares a single group's average or proportion to a known reference number. For example, you might test whether the average test score at your school differs from the national average. This uses a one-sample t-test for means or a one-sample proportion test for binary outcomes. Understanding where your data falls relative to the population average often involves Z-scores, which measure how many standard deviations a value is from the mean.
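For planning purposes, the sample size for this design is often approximated with the normal-approximation formula n = ((z_alpha/2 + z_beta) * sigma / delta)^2, where delta is the smallest difference from the reference value you want to detect and sigma is the standard deviation. A minimal Python sketch (the function name is illustrative; exact t-based software gives slightly larger answers):

```python
import math
from statistics import NormalDist

def one_sample_mean_n(delta, sd, alpha=0.05, power=0.80):
    """Sample size for a two-sided one-sample z-test of a mean
    (normal approximation to the one-sample t-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(((z_alpha + z_beta) * sd / delta) ** 2)
```

For example, to detect a 5 mmHg shift in mean blood pressure when the standard deviation is 15 mmHg, this returns 71 subjects at 80% power and alpha = 0.05.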

Two Independent Groups

This is used when you compare two separate groups, such as a treatment group and a control group. The two-sample t-test compares means, and the Chi-Square test compares proportions. An allocation ratio lets you assign unequal numbers to each group, such as putting twice as many people in the treatment group.

Paired or Matched Design

In a paired design, you measure the same subjects twice — for example, before and after a treatment. Because each person serves as their own control, paired designs are often more efficient and need fewer subjects than two-group designs. The paired t-test handles continuous outcomes, and McNemar's test handles binary outcomes. Analyzing the differences between paired measurements often benefits from tools like the Mean Median Mode Calculator to summarize the distribution of those differences.

The Basic Sample Size Formula for Surveys

The most widely used formula for estimating a proportion is:

n = (Z² × p × (1 − p)) / E²

Where n is the sample size, Z is the Z-score for your confidence level (1.96 for 95%), p is the expected proportion (use 0.50 if unknown, as this gives the largest sample), and E is the margin of error. For a finite population of size N, apply the correction: n_adjusted = n / (1 + (n − 1) / N). If you need to look up or verify a Z-score for a specific confidence level, our Z Score Calculator makes it easy.
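The formula above, including the finite population correction, translates directly into code. A minimal Python sketch (the function name and defaults are illustrative):

```python
import math
from statistics import NormalDist

def survey_sample_size(confidence=0.95, margin=0.05, p=0.50, population=None):
    """Required sample size for estimating a proportion, with an optional
    finite population correction: n / (1 + (n - 1) / N)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 for 95%
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite population correction
    return math.ceil(n)
```

With the defaults (95% confidence, 5% margin, p = 0.50) this returns 385, the widely quoted figure for large populations; passing population=500 shrinks the answer because you are surveying a large share of the group.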



Frequently Asked Questions

What is a good sample size for a survey?

It depends on your population size, confidence level, and margin of error. For most surveys with a large population, a 95% confidence level, and a 5% margin of error, you need about 385 people. If you want more precise results (like a 2% margin of error), you will need around 2,401 people. Use the calculator above to find the exact number for your situation.

What happens if my sample size is too small?

If your sample size is too small, your results become unreliable. You may miss real effects or differences (called a Type II error), and your margin of error will be large. This means you cannot be confident that your findings reflect what is truly happening in the full population.

What is the difference between one-sided and two-sided tests?

A two-sided test checks if there is a difference in either direction (higher or lower). A one-sided test only checks one direction (for example, only whether a new drug is better, not worse). Two-sided tests are more common and require a slightly larger sample size. Use a one-sided test only when you are certain the effect can only go one way.

What is an allocation ratio?

The allocation ratio is the number of people in one group divided by the number in the other group. An allocation ratio of 1 means both groups are the same size. A ratio of 2 means the second group has twice as many people as the first. Unequal allocation is sometimes used when one treatment is cheaper or when more data is needed for a specific group.

Why does using 50% for the expected proportion give the largest sample size?

The formula multiplies p × (1 − p), and this product is at its maximum when p = 0.50. At 50%, there is the most uncertainty about the outcome, so you need the most data to get a precise answer. If you know the true proportion is closer to 10% or 90%, the required sample size drops significantly.
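You can see this directly by evaluating the survey formula at several values of p, using the same Z = 1.96 (95% confidence) and 5% margin of error as above (the helper name n_for is ours):

```python
import math

def n_for(p, z=1.96, e=0.05):
    """Required sample size for expected proportion p at margin of error e."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# p * (1 - p) peaks at p = 0.5, so the required n does too,
# and the curve is symmetric around 0.5.
for p in (0.10, 0.30, 0.50, 0.70, 0.90):
    print(p, n_for(p))
```

The required sample size drops from 385 at p = 0.50 to 139 at p = 0.10 or p = 0.90, which is why 50% is the safe default when you have no prior estimate.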

What is finite population correction?

Finite population correction (FPC) reduces the required sample size when your population is small and known. If you are surveying a large share of the total population, you do not need as many responses to get accurate results. For example, if your population is 500, the calculator adjusts the sample size downward compared to an unlimited population. Enter your population size in the optional field to apply this correction.

What is the difference between confidence level and statistical power?

Confidence level controls how sure you are that your interval contains the true value. It relates to avoiding false positives (Type I error). Statistical power is the chance of detecting a real effect when one exists. It relates to avoiding false negatives (Type II error). Both affect sample size — higher values for either one require more people.

When should I use the paired design option?

Use the paired design when you measure the same subjects twice, such as before and after a treatment. It also applies to matched pairs, like twins assigned to different groups. Paired designs are more efficient because each person acts as their own control, which reduces variability and typically requires fewer subjects than two independent groups.

What is Cohen's d and how do I choose an effect size?

Cohen's d measures the difference between two group means in terms of standard deviations. The standard benchmarks are: 0.2 = small effect, 0.5 = medium effect, and 0.8 = large effect. If you have pilot data or previous studies, use those to estimate your effect size. If not, a medium effect (0.5) is a common starting point. Smaller effects need larger sample sizes to detect.
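For a two-group comparison of means, the standard normal-approximation formula gives the per-group sample size as n = 2 * ((z_alpha/2 + z_beta) / d)^2. A minimal Python sketch (the function name is ours; exact t-based software typically adds one or two subjects per group):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means,
    using the normal approximation and Cohen's d as the effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```

At the benchmarks above with 80% power and alpha = 0.05, this gives roughly 393 per group for a small effect (d = 0.2), 63 for a medium effect (d = 0.5), and 25 for a large effect (d = 0.8).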

What is a non-inferiority trial and when do I use it?

A non-inferiority trial tests whether a new treatment is not worse than an existing treatment by more than a set margin. You use it when the new treatment has other advantages (like fewer side effects or lower cost) and you want to show it performs nearly as well. You set a non-inferiority margin, and the calculator determines how many people you need to prove the new treatment is acceptable.

How does the number of groups affect sample size in ANOVA?

In ANOVA, more groups require a larger total sample size because you need enough people in each group to detect differences. The sample size per group depends on the number of groups, the effect size (Cohen's f), your significance level, and your desired power. For example, comparing 5 groups needs more total subjects than comparing 3 groups at the same power level.

Can I use this calculator for medical or clinical research?

Yes. This calculator supports study designs commonly used in clinical research, including two-group comparisons, paired designs, survival analysis (log-rank test), equivalence trials, and non-inferiority trials. However, for regulatory submissions, you should confirm your calculations with a biostatistician and use validated software as required by your institution or ethics board.

What is the log-rank test used for?

The log-rank test compares time-to-event (survival) data between two groups. It is commonly used in clinical trials to see if one treatment helps patients survive longer or stay disease-free longer than another. The calculator needs the expected hazard ratio, the probability of observing the event, and your desired power to estimate the required sample size.
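A common way to make this estimate is Schoenfeld's formula, which first determines the number of events needed and then scales up by the probability of observing an event. A minimal Python sketch for 1:1 allocation (the function name is ours):

```python
import math
from statistics import NormalDist

def logrank_n(hazard_ratio, p_event, alpha=0.05, power=0.80):
    """Total sample size for a two-sided log-rank test with 1:1 allocation,
    via Schoenfeld's formula: events = 4 * (z_a + z_b)^2 / (ln HR)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    events = 4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2
    return math.ceil(events / p_event)  # scale events up by event probability
```

For example, detecting a hazard ratio of 0.7 with 80% power when 60% of subjects are expected to experience the event requires about 412 subjects in total.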

What does the sensitivity chart show?

The sensitivity chart shows how your results change when one input varies. For example, in a survey calculation, it shows how sample size changes across different margins of error. This helps you see the trade-offs — like how much larger your sample needs to be if you want a 2% margin of error instead of 5%.

What is an equivalence trial?

An equivalence trial tests whether two treatments produce results that are close enough to be considered the same. You set an equivalence margin (the largest acceptable difference), and the study checks if the true difference falls within that margin. This design uses the TOST (Two One-Sided Tests) method and typically needs a larger sample size than a standard superiority trial.


Related Calculators

Percent Error Calculator
Percent Change Calculator
Percentage Calculator
IQR Calculator
Z Score Calculator
Standard Deviation Calculator
Mean Median Mode Calculator
Correlation Coefficient Calculator
p Value Calculator
Chi Square Calculator
Confidence Interval Calculator