
By mahima · 5 mins

AI Test Case Generation from PRDs, Figma, and Code

Writing test cases manually is one of those tasks that everyone agrees is important and nobody has enough time to do properly. A new feature lands, the PRD is still being updated, the designs are half-annotated, the sprint is already moving. Test cases get written in a rush, cover the happy path, and miss the edge cases that will matter in three weeks when a user finds them in production.

The problem isn't that QA teams don't know how to write good tests. It's that good test case writing requires synthesising context from multiple sources simultaneously — requirement, design, implementation — fast enough to stay ahead of the development cycle. That's a lot to ask of anyone working from a static document with one hand and a sprint board with the other.

The Manual Test Writing Problem: Two Ways It Fails

Manual test case writing fails in two consistent ways.

Coverage failures: tests written from memory or a quick PRD skim miss edge cases, boundary conditions, and cross-flow dependencies that only become obvious when you're looking at the actual design or actual code. A QA engineer who wasn't in the design review doesn't know about the empty state screen that the designer added. A test written before the API was built doesn't know about the validation rules the developer implemented.

Maintenance failures: tests written against a specific implementation go stale when the implementation changes. Every time a feature is updated, someone has to re-synthesise the same context to update the tests — and that update often doesn't happen until a regression fails in production.

The answer isn't to write fewer tests. It's to write them from better sources, faster, with less manual synthesis.

How Does AI Test Case Generation Actually Help?

AI test case generation uses a language model to read your source documents — PRDs, design files, codebases — and extract the testable assertions they contain: what should happen, what constitutes success, what the edge cases are, what error states look like. It turns context that existed but wasn't explicitly in test form into structured, executable test cases. The QA engineer's role shifts from "synthesise and write" to "review and refine."
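As a rough illustration of the first half of that pipeline (and not Megumi's actual implementation), here is a minimal Python sketch that pulls explicit bullet-point assertions out of the "Acceptance Criteria"-style sections of a Markdown PRD. A real generator would hand those spans to a language model; this sketch only performs the deterministic extraction step.

```python
import re

def extract_assertions(prd_markdown: str) -> list[str]:
    """Collect bullet points from sections of a Markdown PRD whose
    headings look testable (acceptance criteria, edge cases, error
    states). The heading keywords are illustrative assumptions."""
    assertions = []
    in_criteria = False
    for line in prd_markdown.splitlines():
        heading = re.match(r"#+\s*(.+)", line)
        if heading:
            # Enter "criteria mode" only under headings that imply assertions.
            in_criteria = bool(re.search(
                r"acceptance criteria|edge cases|error states",
                heading.group(1), re.IGNORECASE))
            continue
        bullet = re.match(r"[-*]\s+(.+)", line.strip())
        if in_criteria and bullet:
            assertions.append(bullet.group(1))
    return assertions

prd = """
# Login
## Acceptance Criteria
- Valid credentials land the user on the home screen
- Invalid password shows an inline error
## Visual Notes
- Use the brand palette
"""
print(extract_assertions(prd))
```

Note that the bullet under "Visual Notes" is correctly skipped: it is design guidance, not a testable assertion.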

How to Generate Test Cases from PRD Automatically

Attach a PRD — PDF or Markdown — to Megumi in Test Studio, and it extracts the user flows, acceptance criteria, and edge cases embedded in the document. Not a summary. The actual testable assertions: what the feature should do, what happens when inputs are invalid, what the error states look like, what constitutes a successful outcome.

The output is a structured test case with a name, step-by-step instructions, expected results, and priority level — ready to execute or edit before saving to the library.
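To make that output shape concrete, here is a hypothetical sketch of such a structured test case in Python. The field names (`name`, `steps`, `expected_result`, `priority`) mirror the description above but are illustrative, not Quash's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class TestCase:
    """A structured test case: name, ordered steps, expected result,
    and a priority level. Field names are illustrative assumptions."""
    name: str
    steps: list[str]
    expected_result: str
    priority: Priority = Priority.MEDIUM

    def to_markdown(self) -> str:
        # Render in a reviewable form before saving to a library.
        lines = [f"### {self.name} ({self.priority.value})"]
        lines += [f"{i}. {s}" for i, s in enumerate(self.steps, 1)]
        lines.append(f"Expected: {self.expected_result}")
        return "\n".join(lines)

case = TestCase(
    name="Login rejects an invalid password",
    steps=["Open the login screen",
           "Enter a valid email and a wrong password",
           "Tap Sign in"],
    expected_result="An inline error is shown; the user stays on the login screen",
    priority=Priority.HIGH,
)
print(case.to_markdown())
```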

A PM who writes a thorough spec is, without knowing it, also generating the test suite. The specification and the test cases are the same information expressed in two different formats. Megumi handles the translation.

What Makes PRD-Based Generation Useful vs Generic AI Test Generation?

Generic AI test generation — asking ChatGPT to "write test cases for a login screen" — produces plausible-looking test cases that aren't grounded in your specific requirements, your specific acceptance criteria, or your specific edge cases. Megumi works from your actual document, extracting the assertions your product team already agreed on. The test cases it generates are traceable to specific sections of the PRD — which matters for compliance, audit, and sprint review.

Figma to Test Cases: Design-Based Test Generation

Connect a Figma file via the MCP integration and Megumi reads the design directly — component names, interaction states, annotations, and the navigation structure implied by the frame layout. It generates test cases that reflect what the designer actually built, not what someone remembered from a standup.

This matters most for UI-heavy flows where the design contains implicit test conditions that never make it into the PRD: a disabled button state, an empty state screen, a validation message on an input field, a loading skeleton. Megumi surfaces these as test cases. They don't get missed because nobody thought to write them down — because they were always there, in the design, just never translated into test form.
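As a hedged illustration of that idea (not Megumi's classifier), the sketch below walks a Figma-file-shaped JSON tree, where nodes carry `name`, `type`, and `children` as in the Figma REST API file response, and flags node names that imply UI states worth testing. The keyword list is an assumption made for the example.

```python
def collect_test_conditions(node: dict, path: str = "") -> list[str]:
    """Recursively walk a Figma-style node tree and surface nodes whose
    names suggest implicit test conditions (disabled, empty, error,
    loading states). The keyword heuristic is illustrative only."""
    hints = ("disabled", "empty", "error", "loading", "skeleton")
    here = f"{path}/{node.get('name', '?')}"
    found = []
    if any(h in node.get("name", "").lower() for h in hints):
        found.append(f"{node.get('type', 'NODE')}: {here}")
    for child in node.get("children", []):
        found.extend(collect_test_conditions(child, here))
    return found

doc = {"name": "Checkout", "type": "CANVAS", "children": [
    {"name": "Pay button / disabled", "type": "COMPONENT", "children": []},
    {"name": "Cart / empty state", "type": "FRAME", "children": []},
    {"name": "Header", "type": "FRAME", "children": []},
]}
print(collect_test_conditions(doc))
```

The disabled button and the empty state surface as candidates; the plain header does not.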

For teams where the designer and the QA engineer work in different tools and rarely overlap, Figma-based generation creates a direct bridge between the design intent and the test coverage.

Code-Based Automated Test Case Generation

Link a GitHub, GitLab, or Bitbucket repository in Apps → Knowledge and Megumi indexes the codebase. It generates tests that reference real API endpoints, field names, and validation logic from the actual implementation — not a generalised approximation of what the feature probably does.

The most useful application is per pull request: select the branch, and Megumi generates tests scoped to the specific changes in that diff. Faster review cycles, test cases precisely calibrated to the risk surface of each code change. A PR that adds a new validation rule gets test cases that specifically test that validation rule — not a generic set of tests for the feature area.
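A minimal sketch of that scoping step, assuming the diff has already been produced (for example by `git diff main...branch`): the parser below maps each changed file in a unified diff to its added lines, which is roughly the context a per-PR generator would be handed. It is an illustration, not Quash's pipeline.

```python
def changed_files(unified_diff: str) -> dict[str, list[str]]:
    """Map each changed file in a unified diff to its added lines,
    i.e. the risk surface a per-PR test generator would be scoped to."""
    files: dict[str, list[str]] = {}
    current = None
    for line in unified_diff.splitlines():
        if line.startswith("+++ b/"):
            # New file section in the diff.
            current = line[len("+++ b/"):]
            files[current] = []
        elif current and line.startswith("+") and not line.startswith("+++"):
            # An added line (strip the leading '+').
            files[current].append(line[1:])
    return files

diff = """\
--- a/api/validators.py
+++ b/api/validators.py
@@ -10,2 +10,4 @@
 def validate_email(value):
+    if len(value) > 254:
+        raise ValueError("email too long")
     return EMAIL_RE.match(value) is not None
"""
print(changed_files(diff))
```

Here the added validation rule is exactly what a scoped generator would target: a length check on the email field.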

How Is Code-Based Test Generation Different from Traditional Test Coverage Tools?

Traditional coverage tools tell you which lines of code were executed during tests. Code-based test generation reads the implementation and generates test cases for the behaviour it implements — what the code is supposed to do, based on what it actually does. These are complementary: coverage tools tell you if your tests ran the code; generated test cases help you write tests that validate the code's behaviour.

The Combined Approach: PRD + Figma + Code Together

Used individually, each source produces useful tests. Used together, they produce something closer to genuine comprehensive coverage.

A PM uploads the spec. A designer links the Figma frames. A developer selects the feature branch. Megumi synthesises all three into a single test suite — reflecting the requirement, the design, and the implementation simultaneously. No single person on the team has to hold all of that context at once. Megumi does.

The result: test cases exist before code ships. A QA engineer who would have spent two days manually writing tests after the feature was built now has a reviewed test suite ready when the feature lands. The sprint doesn't wait for test coverage — test coverage waits for the sprint.

Frequently Asked Questions

What is AI test case generation? AI test case generation uses a language model to automatically create structured test cases — with names, step-by-step instructions, expected results, and priority levels — from source documents like PRDs, designs, or code. Quash's Megumi agent generates test cases from PRDs (PDF/Markdown), Figma files, and linked GitHub/GitLab/Bitbucket repositories.

Can you generate test cases from a PRD automatically? Yes. Attach a PDF or Markdown PRD to Megumi in Quash's Test Studio. Megumi extracts user flows, acceptance criteria, and edge cases and generates structured, executable test cases ready to run or edit. The generated test cases are traceable to the specific requirements in the PRD.

How does Figma-based test case generation work? Connect a Figma file via the MCP integration in Quash. Megumi reads component names, interaction states, and annotations directly from the design and generates test cases reflecting what was designed — including edge cases like empty states and disabled button states that often don't make it into the PRD.

What is the best AI tool for generating test cases in 2025? Megumi — Quash's purpose-built test generation agent — generates test cases from PRDs, Figma, and code (GitHub, GitLab, Bitbucket). It works conversationally inside Test Studio, remembers session context, and outputs test cases that are directly executable by Mahoraga without additional formatting or transformation.

Can AI generate test cases from code changes in a pull request? Yes. Link your repository in Apps → Knowledge and Megumi indexes the codebase. Select a specific branch and Megumi generates test cases scoped to the changes in that branch — referencing real API endpoints, field names, and validation logic from the actual implementation rather than a generalised description.