What This Calculator Does
This sample size calculator determines how many observations, survey responses, or experimental subjects you need for your results to be statistically meaningful. It handles two core use cases: proportion-based studies where you measure percentages, and mean-based studies where you measure averages. Both modes support finite population correction, so if your total population is 500 employees rather than 500,000 city residents, the required sample shrinks accordingly.
Enter your population size, confidence level, and margin of error, and get a statistically valid sample size instantly. No statistics degree required, no guessing, just a clear number you can use right away.
Surprising fact: Whether you are surveying 1,000 people in your town or 300 million people in the entire United States, you only need about 384 responses to get a result that is accurate to within plus or minus 5 percentage points, 95% of the time. The population size barely matters once it gets above a few thousand.
How to Use the Calculator
Step 1: Enter your population size. This is the total number of people (or units) in the group you are studying. If your population is very large or unknown, leave it blank; the calculator assumes an effectively infinite population and returns a conservative estimate.
Step 2: Set your confidence level. The standard choice is 95%, meaning that if you repeated this study 100 times with different random samples, approximately 95 of those samples would produce results within the margin of error you specified. Common alternatives are 90% (smaller sample, less certainty) and 99% (larger sample, more certainty).
Step 3: Set your margin of error. This is the maximum acceptable difference between your sample result and the true population value. A 5% margin of error on a survey showing 60% approval means the true value is likely between 55% and 65%, at your chosen confidence level. Tighter margins require larger samples.
Step 4: Enter your expected proportion (optional). If you have a prior estimate of what the result will be, enter it. If you have no prior data, use 0.50, which produces the largest (most conservative) sample size.
Step 5: Read your result. The calculator returns the minimum number of completed responses you need. If you expect some non-responses, divide this number by your anticipated response rate to get the number of invitations to send.
The Sample Size Formula Explained
The standard formula for determining sample size for a proportion is: n = (Z² × p × (1 − p)) / E²
Where n = required sample size, Z = z-score corresponding to your confidence level (1.645 for 90%, 1.96 for 95%, 2.576 for 99%), p = estimated proportion (use 0.5 if unknown), and E = margin of error expressed as a decimal (e.g., 0.05 for ±5%).
When the population is finite (and relatively small), a correction factor reduces the sample size: n_adjusted = n / (1 + ((n − 1) / N)), where N is the total population size. This correction becomes negligible when N exceeds roughly 20,000; beyond that point, population size barely affects the required sample.
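The formula and the finite population correction translate directly into code. Here is a minimal sketch (the function name and interface are illustrative, not the calculator's actual implementation):

```python
import math

def sample_size(z, margin_of_error, p=0.5, population=None):
    """Minimum sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 for 95%)
    margin_of_error: as a decimal (0.05 for +/-5%)
    p: expected proportion; 0.5 is the most conservative choice
    population: total population N, or None if effectively infinite
    """
    n = z**2 * p * (1 - p) / margin_of_error**2
    if population is not None:
        # finite population correction: n / (1 + (n - 1) / N)
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)  # always round up

print(sample_size(1.96, 0.05))                   # -> 385 (large population)
print(sample_size(1.96, 0.05, population=1000))  # -> 278
```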
Worked Example: Survey of 10,000 Customers
You manage a customer database of 10,000 people and want to survey them about satisfaction with a new product feature. You want 95% confidence with ±4% margin of error. You have no prior data, so you use p = 0.50.
Uncorrected sample size: n = (1.96² × 0.50 × 0.50) / 0.04² = 0.9604 / 0.0016 = 600.25, rounded up to 601.
Finite population correction: n_adjusted = 601 / (1 + ((601 − 1) / 10,000)) = 601 / 1.06 ≈ 567.
Result: You need 567 completed responses. If your expected response rate is 30%, send approximately 1,890 survey invitations.
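To double-check the arithmetic, the same worked example in Python (a verification sketch; variable names are illustrative):

```python
import math

# 10,000 customers, 95% confidence, +/-4% margin, no prior data (p = 0.50)
z, e, p, N = 1.96, 0.04, 0.50, 10_000

n_uncorrected = math.ceil(z**2 * p * (1 - p) / e**2)          # 600.25 -> 601
n = math.ceil(n_uncorrected / (1 + (n_uncorrected - 1) / N))  # -> 567

response_rate = 0.30
invitations = math.ceil(n / response_rate)                    # -> 1890
print(n_uncorrected, n, invitations)
```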
Worked Example: Small Team Assessment
You run a 120-person company and want to measure employee engagement. You need 90% confidence with ±5% margin of error.
n = (1.645² × 0.50 × 0.50) / 0.05² = 270.6, rounded up to 271. n_adjusted = 271 / (1 + (270 / 120)) = 271 / 3.25 = 83.4, rounded up to 84.
Result: You need 84 completed responses from your 120-person team. The finite population correction cut the requirement by more than two-thirds; this is where it matters most.
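The same check for the small-team example (note that the adjusted value 83.4 rounds up to 84):

```python
import math

# 120-person company, 90% confidence, +/-5% margin, p = 0.50
z, e, p, N = 1.645, 0.05, 0.50, 120

n_uncorrected = math.ceil(z**2 * p * (1 - p) / e**2)          # 270.6 -> 271
n = math.ceil(n_uncorrected / (1 + (n_uncorrected - 1) / N))  # 83.4 -> 84
print(n)
```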
When Does Population Size Actually Matter?
A common misconception is that sample size must be proportional to population size. In practice, the finite population correction only makes a meaningful difference when the sample is a large fraction of the population (roughly more than 5%).
For 95% confidence with ±5% margin of error (p = 0.50): a population of 100 requires 80 responses; 500 requires 217; 1,000 requires 278; 10,000 requires 370; 100,000 requires 383; 1,000,000 requires 384; 10,000,000 requires 384. Notice how the number plateaus. Once the population exceeds a few thousand, adding more people to the population barely changes the sample size. The real drivers are confidence level and margin of error, not population magnitude.
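The plateau is easy to see by sweeping the corrected formula across population sizes. A quick sketch (rounding up here can differ by one from published tables that round to the nearest integer):

```python
import math

def n_for(N, z=1.96, e=0.05, p=0.5):
    """Corrected sample size for population N at 95% confidence, +/-5%."""
    n0 = z**2 * p * (1 - p) / e**2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

for N in (100, 500, 1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{N:>10,}: {n_for(N)}")
```

The printed values climb quickly for small populations, then flatten: beyond about a million, the answer stops changing entirely.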
How Confidence Level and Margin of Error Trade Off
Tightening either parameter inflates sample size, sometimes dramatically. At 90% confidence with 10% margin of error, you need only 68 respondents. At 95% confidence with 5%, you need 385. At 99% confidence with 1%, you need 16,590. Moving from 5% to 1% margin of error at 95% confidence requires 25× more respondents. Before demanding razor-thin precision, ask whether your decision actually changes at ±5% versus ±1%.
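These trade-off figures can be reproduced with the same base formula (a verification sketch):

```python
import math

def survey_n(z, e, p=0.5):
    """Uncorrected sample size for a proportion."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(survey_n(1.645, 0.10))  # -> 68    (90% confidence, +/-10%)
print(survey_n(1.96, 0.05))   # -> 385   (95% confidence, +/-5%)
print(survey_n(2.576, 0.01))  # -> 16590 (99% confidence, +/-1%)

# Halving E quadruples n; going from 5% to 1% multiplies it by ~25x.
print(survey_n(1.96, 0.01) / survey_n(1.96, 0.05))
```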
Sample Size for A/B Testing: A Different Formula
The calculator above is designed for descriptive surveys. A/B testing, where you compare two variants and want to detect a statistically significant difference, uses a different formula entirely. For A/B tests, the key inputs are baseline conversion rate, minimum detectable effect (MDE), statistical power (typically 80%), and significance level (typically 5%). A/B test sample sizes scale inversely with the square of the MDE: detecting a 1% relative change requires roughly 100× more traffic than detecting a 10% change.
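The standard two-proportion approximation for A/B test sizing looks like this in Python. This is a sketch of the usual textbook formula, not any particular tool's implementation, and dedicated calculators may use slightly different approximations:

```python
import math
from statistics import NormalDist

def ab_sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Per-variant sample size to detect a relative lift over baseline
    with a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# A 10% relative lift on a 10% baseline takes on the order of 15,000 users
# per variant; a 1% relative lift takes roughly 100x that.
print(ab_sample_size_per_variant(0.10, 0.10))
print(ab_sample_size_per_variant(0.10, 0.01))
```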
Common Mistakes to Avoid
Ignoring non-response bias. The calculator tells you how many completed responses you need. If only 25% of people you contact actually respond, your required outreach is 4× the sample size. Worse, non-respondents are often systematically different from respondents.
Using 0.50 when you have better data. The proportion p = 0.50 gives the maximum sample size because variance peaks at that value. If a previous study showed 15% or 85%, using 0.50 overestimates your need by 40–70%. Use real data when you have it.
Confusing confidence level with accuracy. A 95% confidence level does not mean your result is 95% accurate. It means that 95% of similarly constructed intervals would contain the true value.
Treating the formula as universal. The standard formula assumes simple random sampling. Stratified sampling, cluster sampling, and multi-stage designs all require adjustments, typically a design effect multiplier.
Rounding down. Always round up. A calculation yielding 384.2 means you need 385, not 384.
Sample Size for Common Research Scenarios
National opinion polls typically need 1,068 respondents (95% / ±3%). Customer satisfaction surveys need 357–381 (95% / ±5%). Employee engagement surveys need 73–357 (90% / ±5%). Academic research studies need approximately 385 for infinite populations (95% / ±5%). Website A/B tests per variant need 3,000–30,000+ (80% power / 5% MDE). Clinical trials per arm need 50–500+. Quality control inspections need 2,401 (95% / ±2%).
How Response Rate Affects Planning
Your calculated sample size is useless without a realistic response rate assumption. In-person intercept surveys get 70–80% response rates. Phone surveys (live caller) get 15–25%. Email to existing customers gets 10–30%. Email to cold lists gets 2–5%. Social media ads get 0.5–2%. Website pop-ups get 3–8%. Plan for attrition too: incomplete surveys where the respondent drops off partway further reduce your usable sample.
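Planning outreach from these rates is a one-line calculation. A sketch (the helper name and the completion-rate parameter are illustrative):

```python
import math

def invitations_needed(target_responses, response_rate, completion_rate=1.0):
    """People to contact so that enough both respond AND finish the survey."""
    return math.ceil(target_responses / (response_rate * completion_rate))

print(invitations_needed(385, 0.20))        # -> 1925 (email to existing customers)
print(invitations_needed(385, 0.20, 0.85))  # -> 2265 (15% of starters drop off)
```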
The Relationship Between Sample Size and Statistical Power
Statistical power is the probability that your study will detect a real effect when one exists. A study with 80% power has a 20% chance of missing a genuine difference, what statisticians call a Type II error. Power depends on four factors: sample size (larger samples detect smaller effects), effect size (larger real differences are easier to detect), significance level (stricter thresholds require more data), and variability (higher variance requires more observations). For survey research, the sample size formula implicitly builds in power through the confidence level and margin of error. For experimental designs, power analysis is a separate, mandatory step.
When You Need a Statistician Instead of a Calculator
This calculator handles straightforward scenarios: single-proportion surveys, infinite or finite populations, standard confidence levels. You should consult a statistician when your sampling design is complex (stratified, clustered, or multi-stage), you are comparing subgroups and need adequate power within each subgroup, your study involves repeated measures or longitudinal tracking, you are designing a clinical trial with regulatory requirements, your outcome variable is continuous with unknown or highly skewed distribution, or you need to account for covariates in your analysis. A calculator gives you a starting point. A statistician ensures your study design matches your research question.
Frequently Asked Questions
How do I calculate the sample size for a survey?
Enter your population size, desired confidence level (typically 95%), and acceptable margin of error (typically ±5%) into the calculator above. If you have a prior estimate of the expected proportion, enter it; otherwise, use 50% for the most conservative estimate. The calculator returns the minimum number of completed responses you need.
How many respondents do I need for 95% confidence and a ±5% margin of error?
At 95% confidence with ±5% margin of error, you need 385 respondents for large populations. For smaller populations, the number decreases: for example, 278 for a population of 1,000 and 80 for a population of 100.
Is a sample size of 30 enough?
The "n ≥ 30" rule of thumb comes from the Central Limit Theorem and applies to situations where you are estimating a population mean and the underlying distribution is roughly symmetric. For survey proportions, 30 is almost never sufficient; you typically need several hundred respondents for reasonable precision.
What is the minimum sample size for statistical significance?
There is no universal minimum. Statistical significance depends on the effect size you are trying to detect, the variance in your data, and the significance level you set. For surveys, the practical minimum is usually in the range of 100–400 respondents, depending on precision requirements.
Does population size affect the required sample size?
Substantially only when the population is small (under a few thousand). For populations above about 20,000, the required sample size is nearly identical regardless of whether the total population is 50,000 or 50 million.
How many responses does a business survey need?
For most business surveys with a standard 95% confidence / ±5% margin, target 385 completed responses. If you need results broken down by subgroup (e.g., by region, department, or demographic), you need approximately that many per subgroup.
What is the difference between sample size and response rate?
Sample size is the number of usable, completed responses you need for your analysis. Response rate is the percentage of people you contact who actually complete the survey. If you need 400 responses and expect a 20% response rate, you must contact 2,000 people.