
7 Test Design Techniques Every QA Engineer Should Know (With Examples)

By Aston Cook · 7 min read
Tags: test design techniques · equivalence partitioning · boundary value analysis · decision table testing · state transition testing · qa interview test design

Test design techniques are how experienced QA engineers go from "I'll try some stuff" to "I know exactly which test cases will find bugs efficiently." They show up in interviews constantly — especially equivalence partitioning and boundary value analysis — but they're also what separates thorough testers from ones who just run happy paths and hope for the best.

Here are seven techniques you should know cold, with concrete examples for each.

1. Equivalence Partitioning

Equivalence partitioning divides input data into groups (partitions) where every value in the group should be treated the same way by the system. Instead of testing every possible input, you test one representative value from each partition.

Example: A form field accepts ages 18–65 for insurance eligibility.

  • Valid partition: 18–65 (test with 30)
  • Invalid partition below: 0–17 (test with 10)
  • Invalid partition above: 66+ (test with 80)
  • Invalid partition type: non-numeric input (test with "abc")

Four test cases instead of testing every number from 0 to 200. That's the efficiency gain.
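The partitions above translate directly into a small test suite. A minimal sketch, assuming a hypothetical `is_eligible` validator that takes the raw form input as a string:

```python
# Hypothetical eligibility check for the age field (18–65) described above.
def is_eligible(age_input: str) -> bool:
    """Return True if the input is a whole number between 18 and 65."""
    if not age_input.strip().isdigit():   # rejects "abc", "", "-5", "3.5"
        return False
    return 18 <= int(age_input) <= 65

# One representative value per partition:
assert is_eligible("30") is True    # valid partition: 18–65
assert is_eligible("10") is False   # invalid partition below: 0–17
assert is_eligible("80") is False   # invalid partition above: 66+
assert is_eligible("abc") is False  # invalid partition type: non-numeric
```

Four assertions, one per partition — any other value from the same partition would exercise the same branch.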

Interview tip: When an interviewer asks about this technique, always explain why it works — values in the same partition trigger the same code paths, so testing one is logically equivalent to testing all of them.

2. Boundary Value Analysis

Boundary value analysis focuses on the edges of equivalence partitions, because that's where bugs hide. Off-by-one errors, incorrect comparison operators (< vs <=), and rounding issues all cluster at boundaries.

Example: Using the same age field (valid range: 18–65):

  • Test with 17 (just below lower boundary — should reject)
  • Test with 18 (lower boundary — should accept)
  • Test with 65 (upper boundary — should accept)
  • Test with 66 (just above upper boundary — should reject)

Why it matters: A developer who writes if (age > 18) instead of if (age >= 18) won't be caught by testing age 30. But testing exactly 18 catches it immediately. Boundaries are where the logic changes, so that's where you test.
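The four boundary checks can be sketched as a table-driven test (the `is_eligible` function here is illustrative, not from the article):

```python
def is_eligible(age: int) -> bool:
    # A correct implementation uses inclusive comparisons; the classic
    # bug is writing `age > 18`, which testing age=30 would never catch.
    return 18 <= age <= 65

# Exercise both sides of each boundary:
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert is_eligible(age) is expected, f"boundary failed at age={age}"
```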

Interview tip: Pair this with equivalence partitioning in your answer. They're complementary — partitioning tells you which groups to test, BVA tells you which values within those groups matter most.

3. Decision Table Testing

Decision tables work for features where combinations of conditions produce different outcomes. You list every condition, map out the possible combinations, and identify the expected result for each.

Example: A shipping calculator with two conditions:

Order over $50? | Member? | Shipping
----------------|---------|---------
Yes             | Yes     | Free
Yes             | No      | $5
No              | Yes     | $5
No              | No      | $10

Four combinations, four test cases. Without the table, it's easy to miss one of these scenarios — particularly the "No + Yes" case that teams overlook because it's not the happy path or the obvious error path.
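A decision table maps naturally onto data-driven tests — one case per row. A sketch, assuming a hypothetical `shipping_cost` function implementing the table:

```python
# Hypothetical shipping rule implementing the decision table above.
def shipping_cost(order_total: float, is_member: bool) -> str:
    over_50 = order_total > 50
    if over_50 and is_member:
        return "Free"
    if over_50 or is_member:
        return "$5"
    return "$10"

# One test case per row of the decision table:
cases = [
    (60.00, True,  "Free"),  # Yes / Yes
    (60.00, False, "$5"),    # Yes / No
    (40.00, True,  "$5"),    # No  / Yes  — the row teams tend to miss
    (40.00, False, "$10"),   # No  / No
]
for total, member, expected in cases:
    assert shipping_cost(total, member) == expected
```

Keeping the cases as data makes it obvious when a row of the table has no corresponding test.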

When to use it: Any time you have 2–4 boolean conditions that interact. Beyond 4 conditions, the table gets unwieldy and you should consider pairwise testing instead.

4. State Transition Testing

State transition testing maps how a system moves between states based on events. It's essential for anything with a lifecycle: orders, user accounts, subscriptions, workflows.

Example: An order lifecycle:

Created → [payment received] → Paid → [shipped] → In Transit → [delivered] → Completed
Created → [payment failed] → Failed
Paid → [cancelled] → Cancelled → [refund issued] → Refunded

Your test cases cover:

  • Valid transitions: Each arrow in the diagram is a test case
  • Invalid transitions: What happens if you try to ship an order that hasn't been paid? Cancel a completed order? Deliver a cancelled order?

Invalid transitions are where bugs and security issues hide. A system that lets you skip from "Created" to "Completed" without payment is a business-critical defect.
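The lifecycle above can be sketched as an allowed-transitions map, which makes both kinds of test cases mechanical to derive (state and event names are taken from the diagram; the structure is an assumption for illustration):

```python
# Allowed transitions from the order lifecycle diagram.
TRANSITIONS = {
    "Created":    {"payment received": "Paid", "payment failed": "Failed"},
    "Paid":       {"shipped": "In Transit", "cancelled": "Cancelled"},
    "In Transit": {"delivered": "Completed"},
    "Cancelled":  {"refund issued": "Refunded"},
}

def apply_event(state: str, event: str) -> str:
    """Advance the order, rejecting any transition not in the diagram."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"invalid transition: {state!r} + {event!r}")

# Valid transition — each arrow in the diagram is one such case:
assert apply_event("Created", "payment received") == "Paid"

# Invalid transition — shipping an unpaid order must be rejected:
try:
    apply_event("Created", "shipped")
    assert False, "unpaid order was shipped"
except ValueError:
    pass
```

Every key/event pair in the map is a valid-transition test; every pair *not* in the map is a candidate invalid-transition test.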

Interview tip: Draw the state diagram when answering. It shows structured thinking and makes your answer visual, which interviewers remember.

5. Pairwise Testing (All-Pairs)

Pairwise testing generates the smallest set of test cases that covers every combination of any two parameters. It's the practical answer to "we have too many combinations to test them all."

Example: A search form with three dropdowns:

  • Category: Electronics, Clothing, Books
  • Sort by: Price, Rating, Newest
  • Filter: In Stock, All

Full combination: 3 × 3 × 2 = 18 test cases. Pairwise reduces this to around 9 cases while still covering every pair of values at least once.
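Whether a reduced suite really covers every pair is easy to check programmatically. A sketch with a hand-built 9-case suite for the dropdowns above (in practice, generate the suite with a tool; this only verifies coverage):

```python
from itertools import combinations, product

# Parameter values from the search-form example.
params = {
    "category": ["Electronics", "Clothing", "Books"],
    "sort":     ["Price", "Rating", "Newest"],
    "filter":   ["In Stock", "All"],
}

def covers_all_pairs(cases: list[dict]) -> bool:
    """True if every value pair across any two parameters appears in some case."""
    for p1, p2 in combinations(params, 2):
        needed = set(product(params[p1], params[p2]))
        seen = {(c[p1], c[p2]) for c in cases}
        if needed - seen:
            return False
    return True

# The full cross-product (18 cases) trivially covers every pair...
full = [dict(zip(params, vals)) for vals in product(*params.values())]
assert len(full) == 18 and covers_all_pairs(full)

# ...but so does this 9-case suite:
suite = [
    {"category": "Electronics", "sort": "Price",  "filter": "In Stock"},
    {"category": "Electronics", "sort": "Rating", "filter": "All"},
    {"category": "Electronics", "sort": "Newest", "filter": "In Stock"},
    {"category": "Clothing",    "sort": "Price",  "filter": "All"},
    {"category": "Clothing",    "sort": "Rating", "filter": "In Stock"},
    {"category": "Clothing",    "sort": "Newest", "filter": "All"},
    {"category": "Books",       "sort": "Price",  "filter": "In Stock"},
    {"category": "Books",       "sort": "Rating", "filter": "All"},
    {"category": "Books",       "sort": "Newest", "filter": "In Stock"},
]
assert len(suite) == 9 and covers_all_pairs(suite)
```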

Why it works: Research consistently shows that most bugs are triggered by the interaction of one or two parameters, not three or more simultaneously. Pairwise gives you strong coverage with a fraction of the test cases.

Practical note: Use a tool to generate pairwise combinations (like PICT from Microsoft). Don't do this by hand — it's error-prone and defeats the purpose of the technique.

6. Error Guessing

Error guessing is the technique nobody teaches formally but every experienced tester uses. You deliberately test scenarios that your experience tells you are likely to fail: null values, empty strings, special characters, extremely long inputs, zero, negative numbers, concurrent users, and timezone boundaries.

Example: Testing a username field — an experienced tester immediately tries:

  • Empty string
  • Single character
  • 256+ characters
  • Special characters (<script>alert('xss')</script>)
  • Unicode (emoji, Chinese characters, Arabic text)
  • Leading/trailing spaces
  • SQL injection patterns ('; DROP TABLE users;--)

You won't find these cases through equivalence partitioning alone. Error guessing comes from pattern recognition — knowing where bugs tend to live based on what you've seen before.
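The checklist above works well as test data. A sketch, assuming a hypothetical username rule (3–30 ASCII letters, digits, or underscores) — the rule itself is invented for illustration:

```python
import re

# Hypothetical username validator: 3–30 ASCII word characters.
def is_valid_username(name: str) -> bool:
    return bool(re.fullmatch(r"\w{3,30}", name, flags=re.ASCII))

# Error-guessing checklist as data — every entry should be rejected:
hostile_inputs = [
    "",                               # empty string
    "a",                              # single character
    "x" * 300,                        # 256+ characters
    "<script>alert('xss')</script>",  # markup / XSS payload
    "  padded  ",                     # leading/trailing spaces
    "'; DROP TABLE users;--",         # SQL injection pattern
]
for bad in hostile_inputs:
    assert not is_valid_username(bad), f"accepted hostile input: {bad!r}"

assert is_valid_username("alice_01")  # sanity check: a normal name passes
```

Keeping the hostile inputs in one list makes the checklist reusable across every text field you test.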

Interview tip: Frame this as a complement to systematic techniques, not a replacement. "After I apply structured techniques like EP and BVA, I add cases based on common failure patterns I've seen in production" — that shows maturity.

7. Exploratory Testing

Exploratory testing is simultaneous learning, test design, and test execution. You don't write test cases first — you investigate the system with a specific mission and adapt your approach based on what you discover.

Structure it with charters:

  • "Explore the checkout flow with focus on payment edge cases"
  • "Investigate how the system handles session timeout during multi-step forms"
  • "Test the search feature with various special characters and encoding"

Each charter gets a timebox (typically 30–60 minutes), and you document what you tested, what you found, and what areas need deeper investigation.
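The charter-plus-timebox structure can be captured as a lightweight session record so findings don't evaporate after the session (field names are assumptions, not a standard):

```python
from dataclasses import dataclass, field

# Minimal record for a charter-based exploratory session.
@dataclass
class ExploratorySession:
    charter: str
    timebox_minutes: int = 60
    areas_tested: list[str] = field(default_factory=list)
    findings: list[str] = field(default_factory=list)
    follow_ups: list[str] = field(default_factory=list)

session = ExploratorySession(
    charter="Explore the checkout flow with focus on payment edge cases",
    timebox_minutes=45,
)
session.areas_tested.append("declined-card retry path")
session.findings.append("Declined card leaves order stuck in 'Pending'")
session.follow_ups.append("Investigate timeout handling on payment gateway")
```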

When to use it: Early in a feature's lifecycle when requirements are still shifting, after a major refactor when you don't trust the existing test suite, or when you want to find the bugs that scripted tests miss because nobody thought to write a test case for that scenario.

Interview tip: Don't describe exploratory testing as "testing without a plan." It's structured investigation with a clear scope and timeboxed sessions. That distinction matters to interviewers.

---

From Theory to Interview Confidence

Knowing these techniques is step one. Explaining them clearly under interview pressure — with examples, trade-offs, and practical context — is what actually gets you the job.

Want to practice answering test design questions with AI that gives you real feedback? Try AssertHired free — 1 mock interview per month. 7-day free trial on paid plans.
