Manual Testing in 2026: Complete Guide + When to Automate

Manual testing is not going away. But the teams that do it well in 2026 look nothing like the teams that were doing it five years ago.
According to Katalon's 2025 State of Software Quality Report, which surveyed over 1,400 QA professionals across three continents, 82% of testers still use manual testing in their daily work. Not occasionally, not as a fallback — daily. At the same time, the Capgemini/OpenText World Quality Report 2025 found that 89% of organisations are exploring generative AI in their quality engineering workflows, but only 15% have scaled it enterprise-wide.
These two numbers are not contradictory. They describe the same reality: automation is real, AI is reshaping QA, and human testers are still the foundation of software quality for the overwhelming majority of teams shipping real products.
This guide covers what manual testing actually is, the types that matter, how to run it well, which tools support it, and — critically — how to know when you should automate instead.
Manual Testing vs Automated Testing at a Glance
Before going deeper, here's the essential distinction. Both approaches are covered in full later in this guide.
| | Manual Testing | Automated Testing |
| --- | --- | --- |
| Best for | Exploratory, UX, accessibility, new features, novel bugs | Regression, smoke, performance, data-heavy, stable flows |
| Speed | Slower at scale | Faster at scale |
| Setup cost | Lower | Higher |
| Long-term cost | Higher for repeated execution | Lower for repeated execution |
| Maintenance | Adapts naturally when product changes | Requires updates when UI or logic changes |
| Human judgment | Essential | Not available |
| Bug types caught | Usability, UX, context-dependent, unexpected | Regression, configuration, API contract failures |
The short version: automation handles the repetitive and predictable. Manual testing handles the judgment-driven, exploratory, and context-dependent. Most teams that ship reliable software use both deliberately.
What Is Manual Testing?
Manual testing is the process of a human tester validating software without automation scripts by interacting with the application directly — executing test cases, observing behaviour, and comparing results against expected outcomes.
The tester clicks through flows, enters inputs, and evaluates what the software does against what it's supposed to do. No script does this for them.
The definition is simple. What makes manual testing hard is knowing where to apply human judgment, how to apply it systematically, and when to hand something off to automation instead.
Manual testing is not the opposite of quality. It's not what you do when you can't automate. At its best, it's the practice of deploying human curiosity, domain knowledge, and contextual judgment in places where those things are irreplaceable.
What manual testing is not
Manual testing is not randomly clicking around hoping to find bugs. It's not an excuse for not having automation. And it's not the same as a human running a scripted regression checklist — that kind of work is under genuine pressure from automation, and it should be. Repetitive, scripted execution is what automation is designed for. What it cannot replace is the human ability to think laterally, notice what feels wrong even when it's technically correct, and find the bug that nobody thought to look for.

Types of Manual Testing
Manual testing is not one thing. It's a family of distinct practices, each suited to a different situation. Understanding which type to apply — and when — is the core competency of an effective QA engineer.
Quick reference — all 10 types at a glance:
| Testing Type | Purpose | Best Used When |
| --- | --- | --- |
| Exploratory | Unscripted investigation; find what you didn't know to look for | New features, edge case discovery |
| Ad Hoc | Informal break-testing; no documentation | Quick sanity checks, early-stage builds |
| Usability | Evaluate UX, comprehension, intuitiveness | Pre-launch, new UI patterns, onboarding |
| Smoke | Basic build validation — does it start? | Before formal test cycles, post-deploy |
| Sanity | Spot-check a specific area after a fix | After bug fixes, targeted code changes |
| Regression | Verify existing features still work after changes | Post-release, after merges, major updates |
| Integration | Validate how components interact | Cross-service flows, API + UI alignment |
| System | Full application against spec, pre-release | Pre-UAT, full environment validation |
| UAT | Stakeholder sign-off against business needs | Pre-launch approval, client acceptance |
| Accessibility | Validate usability for people with disabilities | Any consumer product, regulated industries |
1. Exploratory Testing
Exploratory testing is the most distinctly human form of testing. There is no predefined test script. The tester learns the application, designs tests, and executes them simultaneously — adapting based on what they discover. Think of it as structured investigation: you're not following a map, you're making one.
This is where novel bugs get found. If a regression suite misses a bug because nobody wrote a test for that scenario, exploratory testing is what catches it — because an experienced tester is actively looking for things that don't fit, feel wrong, or behave unexpectedly.
Best used for: New features, edge case discovery, post-release investigation, complex workflows.
Not suited for: Systematic regression coverage, high-frequency validation of stable flows.
2. Ad Hoc Testing
Ad hoc testing is informal, unstructured, and deliberately unpredictable. There are no test cases, no documentation, no reporting format. A tester sits down with the application and tries to break it any way they can think of.
This sounds chaotic because it is — intentionally. The randomness is the point. It finds bugs that structured testing misses because structured testing is, by definition, based on what someone already thought to check.
Best used for: Quick sanity checks, break-test sessions before exploratory testing begins, finding surface-level issues fast.
Not suited for: Coverage tracking, repeatability, regression.
3. Usability Testing
Usability testing evaluates whether the software is easy to use, intuitive to navigate, and comprehensible to a real person who isn't a developer or QA engineer. It's the practice of testing the experience, not just the function.
A login flow that works technically but uses opaque error messages, confusing field labels, or a non-standard recovery path will fail a usability test even if it passes every automated functional test. Automated testing validates what software does. Usability testing validates whether that behaviour makes sense to an actual human.
Best used for: New UI/UX patterns, onboarding flows, accessibility evaluation, pre-launch review with real users.
Not suited for: Regression testing, API validation, anything with a binary pass/fail based on technical spec.
4. Smoke Testing
A smoke test is the fastest, highest-level check: does the application start, load, and survive basic interaction without crashing? Before running a full test suite on a new build, smoke testing answers the "is this build worth testing?" question.
The name comes from hardware testing — if you power on a circuit board and smoke comes out, you don't need further investigation to know something is wrong.
Best used for: Validating new builds before formal test cycles, post-deployment sanity checks, CI pipeline gates before longer test runs trigger.
Not suited for: Deep functional coverage, edge case discovery.
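The "is this build worth testing?" gate can be sketched as a short probe runner. This is a minimal illustration, not a real harness: the probe names are hypothetical, and the lambdas stand in for actual launch and render checks.

```python
def run_smoke(checks):
    """Run ordered (name, probe) pairs; stop at the first failure.

    Each probe is a zero-argument callable returning True on success.
    Returns (passed, name_of_first_failed_probe_or_None).
    """
    for name, probe in checks:
        try:
            ok = probe()
        except Exception:
            ok = False  # a crashing probe counts as a failing probe
        if not ok:
            return False, name
    return True, None

# Hypothetical probes for a new build; the last one simulates a broken build.
checks = [
    ("app launches", lambda: True),
    ("home screen loads", lambda: True),
    ("login form renders", lambda: False),
]

passed, failed_at = run_smoke(checks)
print(passed, failed_at)  # here: False, "login form renders"
```

Stopping at the first failure is deliberate: a smoke test exists to reject a build fast, not to enumerate everything wrong with it.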
5. Sanity Testing
Often confused with smoke testing, sanity testing is more focused. Where a smoke test checks the whole application superficially, a sanity test verifies a specific area of functionality following a bug fix or change. The question being answered is: does this particular thing work now, and did we break anything obvious around it?
Best used for: Quickly validating a developer's fix before full regression, spot-checking a specific component after a targeted code change.
Not suited for: Broad coverage, new feature validation.
6. Regression Testing
Regression testing verifies that existing functionality still works after new code has been added, bugs have been fixed, or other changes have been made. Its goal is to ensure that the new hasn't broken the old.
This is the type of testing most commonly discussed in the context of automation — because repetitive regression on stable flows is where automation saves the most time. But manual regression still matters for flows that are too unstable to automate, edge cases that no script was written to cover, and sanity checks that need human judgment to interpret correctly.
Best used for: Verifying stability after releases, major merges, dependency updates.
7. Integration Testing
Integration testing checks how different modules or services interact with each other. Where unit testing checks individual components in isolation, integration testing checks whether they work together correctly — data passes between services as expected, APIs return what the UI depends on, and combined behaviours match the system specification.
Manual integration testing involves a tester navigating flows that span multiple components and checking that handoffs between them produce correct results. API testing tools like Postman are commonly used here, even by manual testers with no automation background.
Best used for: End-to-end flow validation across services, verifying API contracts in conjunction with UI behaviour.
8. System Testing
System testing evaluates the entire application as a complete system — all components working together — against the formal specification and business requirements. This is typically performed after integration testing and before user acceptance testing, in an environment that mirrors production as closely as possible.
Best used for: Pre-release validation against requirements, verifying non-functional behaviour (performance, security, compatibility) alongside functional correctness.
9. User Acceptance Testing (UAT)
UAT is performed by end users or business stakeholders — not QA engineers — to verify that the software meets their actual needs. It's the final gate before release: does the product do what the business needs it to do, in the way that the people using it expect?
Best used for: Pre-launch approval by clients or product stakeholders, verifying business requirements against delivered software.
Not suited for: Technical validation, low-level functional checking.
10. Accessibility Testing
Accessibility testing verifies that software can be used by people with disabilities — including users who rely on screen readers, keyboard navigation, high-contrast modes, or assistive technology. This includes evaluating compliance with standards like WCAG 2.1 and Section 508.
Automated tools can catch some accessibility issues. They cannot tell you whether an application is actually usable by someone relying on a screen reader to complete a transaction. That judgment requires a human tester.
Best used for: Any consumer-facing product, regulated industries (healthcare, finance, government), products with explicit accessibility requirements.
The Manual Testing Process: Step by Step
Manual testing without structure is just clicking around. Structure is what makes it systematic, repeatable, and defensible.
Step 1: Requirements Analysis
Before a single test is designed, testers need to understand what the software is supposed to do. Review user stories, acceptance criteria, design specifications, and business requirements. The goal is to identify testable conditions — the specific behaviours that need to be verified.
This is also where ambiguities surface. A tester asking "what should happen when a user skips this required field?" before development begins costs five minutes. Catching that ambiguity after deployment costs significantly more.
Step 2: Test Planning
A test plan defines scope (what will be tested), approach (how it will be tested), resources (who and what), schedule, and exit criteria (what defines done). It's not a bureaucratic exercise — it's how QA, development, and product teams get aligned on what's being validated before anyone starts executing.
For most Agile teams, a test plan doesn't need to be a 40-page document. It needs to answer: what are we testing this sprint, how, and how do we know when we're done?
Step 3: Test Case Design
A test case describes exactly what to test: the preconditions (system state before the test), the steps (what the tester does), the input data (what values are used), and the expected result (what correct behaviour looks like). Good test cases are specific enough to be reproducible by anyone on the team, not just the person who wrote them.
A naming convention that helps: [Feature]_[Scenario]_[Expected Outcome]. Clear naming makes test suites scannable and maintenance faster.
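One way to keep the convention honest is a small lint check over test case names. The regex below is one possible reading of the convention (three CamelCase segments joined by underscores), and the example names are invented:

```python
import re

# [Feature]_[Scenario]_[ExpectedOutcome], interpreted here as three
# CamelCase segments separated by underscores.
NAME_PATTERN = re.compile(r"^[A-Z][A-Za-z0-9]*_[A-Z][A-Za-z0-9]*_[A-Z][A-Za-z0-9]*$")

def follows_convention(name):
    """True if a test case name matches the naming convention."""
    return bool(NAME_PATTERN.match(name))

print(follows_convention("Checkout_EmptyCart_ShowsEmptyState"))  # True
print(follows_convention("test checkout 1"))                     # False
```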
Step 4: Test Environment Setup
The test environment needs to match the target environment as closely as possible. Bugs found in a configuration that doesn't reflect production tell you less than you need to know. For mobile testing, this means testing on real devices — not just emulators — since hardware-specific behaviour (memory pressure, OEM OS layers, network transitions) produces bugs that simulators will never show.
For mobile QA teams, Quash removes a significant amount of the friction here. Instead of manually configuring device connections, capturing logs separately, and then writing up reproduction steps after the fact, Quash captures full session context automatically during testing — device model, OS version, build, network state, logs, and screenshots — so the evidence for each bug is already attached by the time it's filed. This is particularly useful when testing across multiple devices, where tracking which device surfaced which bug is otherwise error-prone.
Step 5: Test Execution
The tester works through the test cases systematically, recording actual results against expected results. Any deviation is a potential defect. Testers should document what they test, not just what fails — test execution records are how you demonstrate coverage.
During execution, effective testers also think laterally: what hasn't been tested? What edge case isn't covered? This is where structured test execution bleeds into exploratory testing, and where the most valuable bugs often get found.
Step 6: Defect Reporting
A bug report that can't be reproduced is a bug that won't get fixed. Every defect should include: steps to reproduce (specific, numbered, starting from a defined state), the actual result, the expected result, environment details (device, OS version, app build), and severity/priority. Screenshots and screen recordings, where available, cut the time to reproduce by an order of magnitude.
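The required fields can be captured as a checkable structure, so an incomplete report is caught before it's filed. The field names below are an illustrative minimum, not the schema of any particular tracker:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str = ""
    steps_to_reproduce: list = field(default_factory=list)  # numbered, from a defined state
    actual_result: str = ""
    expected_result: str = ""
    environment: str = ""  # device, OS version, app build
    severity: str = ""

    def missing_fields(self):
        """Return the names of required fields that are still empty."""
        required = {
            "title": self.title,
            "steps_to_reproduce": self.steps_to_reproduce,
            "actual_result": self.actual_result,
            "expected_result": self.expected_result,
            "environment": self.environment,
            "severity": self.severity,
        }
        return [name for name, value in required.items() if not value]

# An invented, half-filled report: it should be rejected, not filed.
report = BugReport(title="Cart cleared after session timeout",
                   actual_result="Cart is empty after re-login")
print(report.missing_fields())
# here: ['steps_to_reproduce', 'expected_result', 'environment', 'severity']
```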
Step 7: Regression and Sign-Off
After defects are fixed, re-test the reported issue (confirmation testing) and run a regression pass to verify that the fix didn't introduce new problems. Once the agreed exit criteria are met, the build is cleared for release.
Manual Testing Techniques
Beyond the types of manual testing, there are techniques — structured approaches to designing what to test.
Equivalence Partitioning divides input data into groups that the application should treat the same way. Instead of testing every possible input value, you test one from each partition. An age field that accepts 18–65 has at least three partitions: below 18, between 18–65, and above 65. You test one value from each.
Boundary Value Analysis focuses on the edges of valid input ranges, where bugs are disproportionately likely. For an age field accepting 18–65: test 17, 18, 65, and 66. Edge conditions are where off-by-one errors, validation failures, and boundary condition bugs live.
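Both techniques can be sketched against the age-field example above (valid range 18 to 65). The validator here is a stand-in for whatever the application actually does:

```python
def accepts_age(age):
    """Stand-in for the validator under test: field accepts 18-65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {"below range": 10, "in range": 40, "above range": 80}
for label, value in partitions.items():
    print(label, accepts_age(value))

# Boundary value analysis: probe just outside and just inside each edge.
boundaries = {17: False, 18: True, 65: True, 66: False}
for value, expected in boundaries.items():
    assert accepts_age(value) == expected, f"boundary bug at {value}"
```

Three representative values plus four boundary probes cover what exhaustive input testing would need thousands of values to cover, which is the whole point of both techniques.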
Decision Table Testing maps combinations of conditions to their corresponding actions. It's especially useful for complex business logic: if condition A and condition B are true but condition C is false, what happens? Decision tables make it hard to miss a combination.
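A decision table can be written out as an explicit mapping, with a completeness check that makes a missed combination impossible to overlook. The business rule below (three flags deciding a discount) is invented purely for illustration:

```python
from itertools import product

# Hypothetical rule: (is_member, has_coupon, order_over_100) -> action
decision_table = {
    (True,  True,  True):  "apply 20% discount",
    (True,  True,  False): "apply 15% discount",
    (True,  False, True):  "apply 10% discount",
    (True,  False, False): "no discount",
    (False, True,  True):  "apply 10% discount",
    (False, True,  False): "apply 5% discount",
    (False, False, True):  "no discount",
    (False, False, False): "no discount",
}

# Completeness check: all 2^3 combinations of the conditions are covered.
assert set(decision_table) == set(product([True, False], repeat=3))

print(decision_table[(True, True, False)])  # "apply 15% discount"
```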
State Transition Testing models software as a system that moves between states based on events. A login flow has states: not logged in, credentials entered, authentication in progress, logged in, session expired. State transition testing verifies that every valid transition works and every invalid transition is handled correctly.
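The login-flow states from the paragraph above can be modelled as an allowed-transitions map; the specific transitions listed here are an assumed spec for illustration:

```python
# Assumed login-flow spec: each state maps to the states it may move to.
ALLOWED = {
    "not_logged_in":       {"credentials_entered"},
    "credentials_entered": {"authenticating", "not_logged_in"},
    "authenticating":      {"logged_in", "not_logged_in"},
    "logged_in":           {"session_expired", "not_logged_in"},
    "session_expired":     {"not_logged_in", "credentials_entered"},
}

def transition_is_valid(current, nxt):
    """True if the spec allows moving from `current` to `nxt`."""
    return nxt in ALLOWED.get(current, set())

print(transition_is_valid("authenticating", "logged_in"))   # True
print(transition_is_valid("session_expired", "logged_in"))  # False: must re-authenticate
```

The test design then falls out of the map: exercise every pair in `ALLOWED` (each must work) and a sample of pairs outside it (each must be rejected).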
Error Guessing uses experience and intuition to identify where bugs are likely to be. Where have bugs appeared before in this type of application? What inputs are commonly mishandled? What happens with empty fields, special characters, very long strings, concurrent requests? This is less structured than the other techniques — and often more effective at finding the bugs that structured testing misses.
Manual Testing Tools
Manual testing doesn't mean working without tools. The right tools make manual testers faster, more systematic, and more effective.
| Category | Tools | What It Does |
| --- | --- | --- |
| Test Case Management | TestRail, Zephyr, Xray, Quash | Organises test cases, tracks execution, manages coverage |
| Bug Reporting | Jira, Linear, GitHub Issues, Quash | Logs defects with context, tracks resolution |
| API Testing | Postman, Insomnia | Tests API endpoints manually without writing code |
| Browser DevTools | Chrome DevTools, Firefox DevTools | Inspects network requests, console errors, DOM state |
| Screen Recording | Loom, Kap | Captures test sessions for bug reports |
| Mobile Testing | Quash, BrowserStack | Tests on real devices, captures logs and crash reports |
| Accessibility | Axe, WAVE, NVDA | Audits accessibility issues manually and semi-automatically |
| Performance Observation | Chrome Lighthouse, GTmetrix | Surfaces performance issues during manual review |
A few things worth knowing before choosing:
For beginners, start with three: a test management tool (even a simple spreadsheet or Quash's built-in management), Postman for API testing, and your browser's DevTools. These cover 80% of what most QA engineers need day-to-day without a steep learning curve.
For mobile teams, real-device access is non-negotiable. Emulators miss the class of bugs that matter most in production — OEM OS behaviour, memory pressure, touch event handling, hardware-specific rendering. Quash is purpose-built for this: it captures full device context during manual sessions (logs, network data, screenshots, crash reports), lets testers report bugs with a single tap via shake-to-report, and ties those reports directly to Jira or your existing tracker without manual re-entry. That context — the exact device, OS, build, and reproduction steps — is what turns a vague bug report into something a developer can actually fix.
For enterprise QA teams, the priority shifts to traceability: connecting requirements to test cases to results to defect tickets. TestRail and Zephyr handle this well for larger teams. Quash's test management layer does this for mobile-focused teams, with the added advantage of bridging manual test sessions into automated regression over time as flows stabilise.
When to Automate vs. When to Stay Manual: A Decision Framework
This is the question most teams get wrong, usually by trying to automate too much too soon, or by staying manual on flows that are wasting hours of QA time every sprint.
The decision isn't binary. It's a continuous assessment of each test case against three variables:
Frequency: How often does this test run? A test that executes on every build is worth automating. A test that runs quarterly probably isn't — the investment in building and maintaining it often outweighs the time it saves.
Stability: Is the feature under active development? Automating a test for a feature that's changing every sprint means you spend more time updating the test than running it. Automate stable, established functionality. Features in flux wait.
Consequence: What happens if this test misses a bug? Login, payments, core user journeys — high consequence. Admin settings, internal tools — lower consequence. High-consequence flows earn automation priority regardless of technical complexity.
The decision matrix — run every test case through all three filters:
| Frequency | Stability | Consequence | Decision |
| --- | --- | --- | --- |
| High | Stable | High | Automate this sprint |
| High | Stable | Low | Automate next quarter |
| Low | Stable | High | Automate eventually |
| Any | Unstable | Any | Keep manual until feature stabilises |
| Low | Any | Low | Keep manual indefinitely |
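The matrix reads directly as a small helper. This is a sketch of the framework as described, with the three filters passed in as plain strings; the unstable row is checked first because it overrides the other two filters:

```python
def automation_decision(frequency, stability, consequence):
    """Apply the three-filter matrix to a single test case.

    frequency: "high" or "low"; stability: "stable" or "unstable";
    consequence: "high" or "low".
    """
    if stability == "unstable":
        return "keep manual until feature stabilises"
    if frequency == "high" and consequence == "high":
        return "automate this sprint"
    if frequency == "high" and consequence == "low":
        return "automate next quarter"
    if frequency == "low" and consequence == "high":
        return "automate eventually"
    return "keep manual indefinitely"

print(automation_decision("high", "stable", "high"))    # automate this sprint
print(automation_decision("low", "unstable", "high"))   # keep manual until feature stabilises
```

Run every test case in the suite through it once a quarter and the automation backlog prioritises itself.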
Always keep manual:
Exploratory testing (by definition)
Usability and UX evaluation
Accessibility with assistive technology
New features under active development
AI-generated feature validation (bias, hallucination, contextually wrong outputs)
One-time scenarios unlikely to recur
Always automate:
Login and authentication flows
Core user journeys run before every release
Regression tests for bugs that have reached production before
Data-intensive tests requiring multiple input combinations
Performance benchmarks
Manual Testing in Agile and CI/CD
The traditional model — developers build, QA tests at the end of the sprint — is gone for most teams shipping at speed. Agile and CI/CD change where manual testing fits, but they don't eliminate it. They change when it happens and who it involves.
In Agile, effective QA engineers are involved from the beginning of a sprint, not after development completes. They're in planning sessions asking "what could break?" before code is written. They're reviewing acceptance criteria for testability — flagging ambiguity that will produce bugs if left unaddressed. They're the person who notices that a user story doesn't account for what happens when the user does something unexpected.
This shift — from end-of-process gatekeeper to early collaborator — is the biggest change in how manual testing works in modern development teams. It requires testers who understand the product deeply, not just the test script.
In CI/CD environments, automated tests handle the gates. Every pull request triggers a smoke test. Every merge triggers regression. Manual testing handles what automation can't: exploratory sessions on new features, usability review of recent UI changes, accessibility validation, and the investigation of automated failures that need human interpretation to diagnose correctly.
The pipelines do the repetitive work. Manual testers do the judgment work. These aren't competing — they're the right division of labour.
For mobile teams navigating this split, Quash sits at the intersection: it lets QA engineers run manual sessions on real devices, capture full context for each bug, and then gradually automate the repetitive flows that have stabilised — all within the same platform, without needing a separate automation framework to be built alongside.
What Automation Still Cannot Do
This matters enough to say directly, because the "manual testing is dying" narrative often skips it. Here are the four specific failure modes of automation that explain why human testing is non-negotiable.
Automation can't feel confused. A test script validates that a button exists, is clickable, and produces the expected outcome. It cannot tell you that the button label reads "Proceed to next step" when what users need is to understand whether they're confirming a purchase. Scripts test what software does. Humans test whether software makes sense to the person using it.
Automation can't find intersection bugs. The login flow works. The payment flow works. The session timeout works. But what happens when a user is halfway through a payment, the session times out, they re-authenticate, and the system silently drops their cart? A scripted test for the payment flow won't find this because it doesn't know to combine login expiry with mid-flow recovery. An exploratory tester does — because they're actively probing seams between features, not confirming expected paths.
Automation can't test what it wasn't told to test. Automated suites are backward-looking by design: they confirm that things that worked before still work now. New bugs come from unexpected combinations as software evolves — not from old failure modes. A genuinely novel bug is invisible to a suite that was never programmed to look for it. This is not a limitation that better tooling solves; it's structural.
AI-generated code expands the testing surface rather than shrinking it. In December 2025, CodeRabbit analysed 470 open-source pull requests comparing AI-generated code against human-written code across logic, security, maintainability, and performance — and found that AI-generated code produced approximately 1.7x more issues per PR. Logic errors were 75% more common; security vulnerabilities ran 1.5 to 2 times higher. (The full methodology and dataset are published in CodeRabbit's State of AI vs Human Code Generation Report, December 2025.) As AI coding assistants become standard, the volume of code shipped increases and so does defect density. The new categories of bugs that AI-generated code introduces — subtle logic errors, misconfigurations, edge cases the AI couldn't anticipate — are exactly what human judgment is best at catching.
Manual Testing Best Practices
Test early, not just before release. The cost of finding a bug in planning is trivial. The cost of finding it in production is not. Effective QA involvement begins with requirements review, not with a completed feature handoff.
Use session-based test management for exploratory testing. Time-boxed sessions with defined charters — "For 60 minutes, explore the checkout flow as a first-time user" — bring structure to exploratory testing without eliminating its flexibility. Document what you tested, what you found, and what you didn't get to. This turns exploration from an informal activity into something defensible and repeatable.
Write test cases that anyone can reproduce. A test case that only makes sense to the person who wrote it is not a test case — it's a note. Good test cases specify exact preconditions, exact steps, exact expected results. Someone else on the team should be able to execute them without asking for clarification.
Separate test design from test execution. Designing test cases while executing them leads to incomplete coverage. The decision about what to test should happen before the session where you're executing. This is harder than it sounds under sprint pressure, but it's the difference between coverage you can measure and coverage you're assuming.
Report bugs with enough context to reproduce, not just enough to file. Steps, environment, expected vs actual, build number, screenshot or recording. A developer who can reproduce a bug in five minutes will fix it faster than a developer who spends an hour trying to recreate the conditions.
Track what you found through exploration vs. scripted testing. Bugs found through exploratory testing that scripted testing missed are your most valuable evidence for the value of human testing. Document them separately. That track record is worth more than any certification in a hiring conversation.
Treat flaky or undocumented manual test cases as technical debt. If a test case produces different results depending on who runs it, it isn't measuring anything useful. Fix it or remove it. Ambiguous test cases are worse than no test cases because they consume time while providing false confidence.
Manual Testing Metrics That Matter
Not every metric reflects actual quality. The ones that do:
Defect Detection Efficiency (DDE): Defects found during testing divided by total defects found (including post-release). Higher DDE means more bugs caught before users see them. This is the number that matters most to stakeholders.
Defect Escape Rate: Bugs that reach production as a percentage of all bugs found. Track this over time. A falling trend tells you the process is working. A rising trend tells you something in the process is failing — often insufficient exploratory coverage on new features.
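Both ratios are simple to compute from the same two counts; the sample numbers below are invented for illustration:

```python
def defect_detection_efficiency(found_in_testing, found_post_release):
    """DDE: share of all defects that were caught before release."""
    total = found_in_testing + found_post_release
    return found_in_testing / total if total else 0.0

def defect_escape_rate(found_in_testing, found_post_release):
    """Escape rate: share of all defects that reached production."""
    total = found_in_testing + found_post_release
    return found_post_release / total if total else 0.0

# Invented sample quarter: 90 bugs caught in testing, 10 escaped to production.
print(defect_detection_efficiency(90, 10))  # 0.9
print(defect_escape_rate(90, 10))           # 0.1
```

The two are complements of each other, which is why tracking either one over time tells the same story; report whichever framing lands better with your stakeholders.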
Test Coverage: The percentage of requirements, user stories, or defined scenarios that have corresponding test cases. Coverage alone doesn't mean quality — 100 shallow test cases can achieve 100% coverage while missing every important bug. But low coverage is a reliable signal that something important isn't being tested.
Test Execution Rate: Planned tests executed versus planned tests in scope. Chronic under-execution usually signals that manual testing is bottlenecking on time — which is the clearest signal that something should move toward automation.
Bugs Found per Exploratory Session: Not a performance metric, but a health check. Experienced testers who know a product well and have time to explore it should be finding things. If exploratory sessions consistently produce nothing, either the product is unusually stable, the charter is too narrow, or the tester isn't exploring effectively.
Manual Testing vs Automated Testing: The Right Frame
The manual vs. automated debate is a false choice for almost every team. The teams that ship reliable software run both — deliberately, with each covering what the other can't.
Automation handles what is repetitive, predictable, and high-frequency: regression suites on stable flows, smoke tests on every build, performance benchmarks, data-intensive API validation. It does this faster, more consistently, and without a person present.
Manual testing handles what requires a person: exploring edge cases, validating UX, assessing accessibility in context, investigating novel bugs, and making the judgment call that a technically-correct output isn't actually correct for the user.
The PractiTest 2026 State of Testing Report found that only 5% of companies have achieved fully automated testing. That figure has been stable because the 95% aren't behind — they're running a mix of both, because that mix is the right answer for the problems they're solving.
The question is never "should we automate?" It's always "which specific tests should be automated, and when?" The three-filter framework in the section above gives a repeatable answer to that question for any test case in your suite.
Frequently Asked Questions
What is manual testing in software testing?
Manual testing is the process of a human tester evaluating software by executing test cases directly — interacting with the application, observing its behaviour, and comparing results against expected outcomes — without using automation scripts. It includes a range of testing types: exploratory, usability, regression, smoke, sanity, integration, UAT, and accessibility testing, among others.
Is manual testing still relevant in 2026?
Yes. Katalon's 2025 State of Software Quality Report found that 82% of QA professionals still use manual testing in their daily work. The forms of manual testing under pressure are narrow: repetitive, scripted regression execution on stable flows. Exploratory testing, usability evaluation, accessibility testing, and the validation of AI-generated features are growing in importance, not declining.
What is the difference between manual testing and automated testing?
Manual testing uses a human tester to evaluate software through direct interaction. Automated testing uses scripts and frameworks to execute predefined test cases without human involvement. Manual testing applies human judgment, adapts to unexpected behaviour, and evaluates UX and usability. Automation executes faster, runs on every code change, and doesn't tire or vary. Most effective QA strategies use both.
What are the main types of manual testing?
The primary types are: exploratory testing (unscripted investigation), ad hoc testing (informal break-testing), usability testing (evaluating user experience), smoke testing (basic build validation), sanity testing (spot-checking after specific changes), regression testing (verifying existing functionality after changes), integration testing (validating component interactions), system testing (full application evaluation), user acceptance testing (stakeholder sign-off), and accessibility testing (evaluating use by people with disabilities).
When should you automate instead of testing manually?
Automate when a test is high-frequency (runs on every build), high-consequence (a miss would reach production), and tests stable functionality that isn't actively changing. Keep manual when testing exploratory scenarios, usability and UX, new features under development, one-time scenarios, and anything requiring human judgment to evaluate correctly. The three-filter framework — frequency, stability, consequence — applied to each test case gives a consistent answer. Read the full framework: What to Automate First →
What skills do manual testers need in 2026?
Exploratory testing methodology (structured, charter-based sessions). API testing with tools like Postman. Basic understanding of CI/CD pipelines and where testing fits within them. SQL for backend data validation. Familiarity with a test management tool. Domain knowledge specific to the product and industry being tested. Increasingly: the ability to review and validate AI-generated test cases and AI-generated features — a genuinely new and growing part of the role.
What tools do manual testers use?
Test management tools (TestRail, Zephyr, Quash) for organising and tracking test cases. Bug tracking (Jira, Linear) for defect reporting. API testing tools (Postman, Insomnia) for validating backend behaviour. Browser DevTools for inspecting network and DOM state. Screen recording tools (Loom) for capturing evidence. Real device testing platforms (Quash, BrowserStack) for mobile QA. Accessibility auditing tools (Axe, WAVE, NVDA) for compliance testing.
Can manual testing and automation coexist in a CI/CD pipeline?
Yes — and this is the standard for teams shipping at speed. Automated smoke tests and regression suites run on every pull request and merge. Manual testing handles exploratory sessions on new features, usability review, accessibility evaluation, and investigation of automated failures that need human interpretation. Automation handles the gates. Manual testing handles the judgment work. The two aren't competing; they cover different failure surfaces.
How do you do exploratory testing well?
Use session-based test management: define a specific charter for each session ("For 60 minutes, explore the checkout flow as a new user on a mid-range Android device"), time-box it, and document what you tested, what you found, and what you didn't get to. Charter-based exploration is structured enough to produce repeatable, defensible coverage while preserving the flexibility that makes exploratory testing valuable. Track bugs found through exploration separately from bugs found through scripted testing — that differential is your evidence of exploratory testing's value.
How does manual testing fit in Agile development?
In Agile, effective manual testing starts at the beginning of a sprint, not after development is complete. QA involvement includes: reviewing user stories for testability and ambiguity during planning, asking "what could break?" during design, exploring new features during development rather than waiting for a completed handoff, and running exploratory sessions alongside automated regression before release. The shift from end-of-sprint gatekeeper to early collaborator is the most significant change in how manual testing operates in Agile teams.