Quash is powered by two AI agents. They have distinct roles — one thinks, one acts. Understanding what each one does, and where the boundary between them lies, is the foundation for using Quash effectively.

Documentation Index

Fetch the complete documentation index at: https://quashbugs.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Megumi — the test generation agent
Megumi is the agent you work with in Test Studio. Its job is to take context — your app, your designs, your requirements, your prompts — and turn it into structured, executable test cases. Megumi does not execute tests. It does not touch your device. It reads, reasons, and writes.

The closest human equivalent is a senior QA engineer who has thoroughly read everything about your product before writing a single test. They know the PRD. They have looked at the designs. They understand what the app is supposed to do, what can go wrong, and which edge cases matter. When they write a test, it is specific, it is complete, and it includes things you might not have thought to ask for.

Megumi works the same way — but it gets that knowledge from what you give it. A prompt with no context produces generic steps. A prompt backed by your GitHub branch, a Figma file, a PRD, and a Jira ticket produces tests that reference your actual screen names, your real API endpoints, and your documented acceptance criteria.

What Megumi accepts:

- Plain English prompts describing a feature or flow
- Attached documents — PRDs, specs, feature briefs (.pdf, .doc)
- Figma designs — screen layouts, interaction states, error states
- GitHub repository — API contracts, data models, business logic
- Jira issues — user stories, acceptance criteria, bug reports
- Follow-up prompts that refine, adjust, or extend what was already generated

What Megumi produces:

- Named test cases with step-by-step instructions
- Expected results — the pass condition for each test
- Priority levels — Critical, High, Medium, Low
- Edge cases and error scenarios you did not explicitly ask for
- Tests you can save to your library or push directly into a suite
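To make the shape of that output concrete, here is a minimal sketch of a generated test case as a data model. This is illustrative only — the class and field names are assumptions, not Quash's actual schema — but it captures the pieces listed above: a name, ordered steps, an expected result, and a priority.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Priority(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class TestCase:
    name: str                # named test case
    steps: List[str]         # step-by-step instructions
    expected_result: str     # the pass condition for the test
    priority: Priority = Priority.MEDIUM

# The kind of test Megumi might produce for a login flow
login_test = TestCase(
    name="Login fails with invalid password",
    steps=[
        "Launch the app",
        "Enter a valid email on the login screen",
        "Enter an incorrect password",
        "Tap the 'Sign in' button",
    ],
    expected_result="An 'Invalid credentials' error message is shown",
    priority=Priority.HIGH,
)
```

The point of the structure is that every test carries its own pass condition, so the execution agent never has to guess what success looks like.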
Mahoraga — the execution agent
Mahoraga is the agent that acts. It runs on your Android device or emulator, navigates your app, and carries out the instructions in your test cases. Mahoraga does not generate tests. It executes them.

It installs automatically on connected devices and emulators. On emulators it comes pre-configured. On physical devices you enable accessibility permissions once after first install. From that point it works silently in the background — you do not interact with it directly.

What Mahoraga does during a test run:

- Launches your app on the connected device
- Reads UI elements using Android’s accessibility APIs
- Executes each step — taps, swipes, scrolls, text input, navigation
- Captures screenshots at each step
- Records the full session as a video
- Evaluates whether the expected result was met
- Reports the outcome back to Quash
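The per-step loop above can be sketched in a few lines of Python. Everything here is hypothetical — the function names and the callback-based design are stand-ins for the on-device behaviour described above, not an API Quash exposes:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StepResult:
    instruction: str
    screenshot: str   # evidence captured after the step
    passed: bool

def run_test(steps: List[str],
             execute: Callable[[str], None],
             capture: Callable[[], str],
             evaluate: Callable[[], bool]) -> List[StepResult]:
    """For each step: act, capture evidence, evaluate the result.

    `execute`, `capture`, and `evaluate` stand in for the real
    on-device actions (taps/swipes via accessibility APIs),
    screenshotting, and expected-result checks.
    """
    results = []
    for instruction in steps:
        execute(instruction)                  # tap, swipe, type, navigate
        shot = capture()                      # screenshot at each step
        results.append(StepResult(instruction, shot, evaluate()))
    return results

# A dry run with stub callbacks in place of a real device session
log: List[str] = []
results = run_test(
    steps=["Launch app", "Tap 'Sign in'"],
    execute=log.append,
    capture=lambda: f"shot_{len(log)}.png",
    evaluate=lambda: True,
)
```

The evidence-first shape matters: every step leaves behind a screenshot and a pass/fail verdict, which is what makes the final report reviewable.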
How they work together
Megumi and Mahoraga are separate but sequential. Megumi produces the plan. Mahoraga carries it out.

Why two agents instead of one
The split is intentional. Test generation and test execution are fundamentally different problems.

Generation is a reasoning problem — it requires reading documents, understanding intent, inferring edge cases, and producing structured output. Megumi is optimised for this. It is a language model purpose-built for QA test generation.

Execution is an interaction problem — it requires real-time vision, precise targeting of UI elements, handling unexpected states, and capturing evidence. Mahoraga is optimised for this. It runs on-device, directly against your app’s accessibility layer.

Combining them into one agent would mean compromising both. Keeping them separate means each one does its job at the highest level.

Supported platforms
| Agent | Android | iOS |
|---|---|---|
| Megumi | ✓ Generates tests for Android | ✓ Generates tests for iOS |
| Mahoraga | ✓ Physical devices, emulators, cloud | iOS simulators only (Mac) |
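Taken together, the division of labour is a two-stage pipeline: generation produces a plan, execution consumes it. A deliberately minimal sketch, with hypothetical stand-in functions for each agent:

```python
def generate_tests(context: dict) -> list:
    """Stand-in for Megumi: turn context into a list of test steps."""
    feature = context.get("prompt", "the app")
    return [f"Launch {feature}", f"Verify {feature} loads"]

def execute_tests(steps: list) -> dict:
    """Stand-in for Mahoraga: carry out the plan and report outcomes."""
    return {"executed": len(steps), "passed": len(steps)}

# Generation feeds execution; neither agent does the other's job.
plan = generate_tests({"prompt": "checkout flow"})
report = execute_tests(plan)
```

The one-way handoff is the key property: richer context improves the plan, and a better plan improves the run, but the two stages never blur into each other.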