
Documentation Index

Fetch the complete documentation index at: https://quashbugs.com/docs/llms.txt

Use this file to discover all available pages before exploring further.

Quash is powered by two AI agents. They have distinct roles — one thinks, one acts. Understanding what each one does, and where the boundary between them is, is the foundation for using Quash effectively.

Megumi — the test generation agent

Megumi is the agent you work with in Test Studio. Its job is to take context — your app, your designs, your requirements, your prompts — and turn it into structured, executable test cases. Megumi does not execute tests. It does not touch your device. It reads, reasons, and writes.

The closest human equivalent is a senior QA engineer who has thoroughly read everything about your product before writing a single test. They know the PRD. They have looked at the designs. They understand what the app is supposed to do, what can go wrong, and which edge cases matter. When they write a test, it is specific, it is complete, and it includes things you might not have thought to ask for.

Megumi works the same way — but it gets that knowledge from what you give it. A prompt with no context produces generic steps. A prompt backed by your GitHub branch, a Figma file, a PRD, and a Jira ticket produces tests that reference your actual screen names, your real API endpoints, and your documented acceptance criteria.

What Megumi accepts:
  • Plain English prompts describing a feature or flow
  • Attached documents — PRDs, specs, feature briefs (.pdf, .doc)
  • Figma designs — screen layouts, interaction states, error states
  • GitHub repository — API contracts, data models, business logic
  • Jira issues — user stories, acceptance criteria, bug reports
  • Follow-up prompts that refine, adjust, or extend what was already generated
What Megumi produces:
  • Named test cases with step-by-step instructions
  • Expected results — the pass condition for each test
  • Priority levels — Critical, High, Medium, Low
  • Edge cases and error scenarios you did not explicitly ask for
  • Tests you can save to your library or push directly into a suite
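A generated test case can be pictured as structured data. Below is a minimal sketch; the field names are illustrative assumptions, not Quash's actual export schema:

```python
# Hypothetical shape of a Megumi-generated test case.
# Field names and values are illustrative, not Quash's real schema.
test_case = {
    "name": "Login - wrong password shows inline error",
    "priority": "High",  # one of: Critical, High, Medium, Low
    "steps": [
        "Launch the app and open the Login screen",
        "Enter a registered email address",
        "Enter an incorrect password",
        "Tap Sign In",
    ],
    "expected_result": (
        "An inline error appears and the user remains on the Login screen"
    ),
}

assert test_case["priority"] in {"Critical", "High", "Medium", "Low"}
```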
Megumi is also conversational. If the first output is not right, keep prompting in the same session. Megumi remembers everything discussed and builds on it — you are not starting over each time.

Mahoraga — the execution agent

Mahoraga is the agent that acts. It runs on your Android device or emulator, navigates your app, and carries out the instructions in your test cases. Mahoraga does not generate tests. It executes them.

It installs automatically on connected devices and emulators. On emulators it comes pre-configured. On physical devices you enable accessibility permissions once after first install. From that point it works silently in the background — you do not interact with it directly.

What Mahoraga does during a test run:
  • Launches your app on the connected device
  • Reads UI elements using Android’s accessibility APIs
  • Executes each step — taps, swipes, scrolls, text input, navigation
  • Captures screenshots at each step
  • Records the full session as a video
  • Evaluates whether the expected result was met
  • Reports the outcome back to Quash
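The run loop above can be sketched roughly as follows. Every function name here is a hypothetical stand-in (Mahoraga's internals are not public), but the shape of the loop mirrors the steps described:

```python
# Hypothetical sketch of an on-device execution loop.
# perform / screenshot / check stand in for real device actions.

def perform(step: str) -> None:
    """Carry out one step: tap, swipe, scroll, or text input."""
    pass  # would drive the accessibility layer on a real device

def screenshot() -> str:
    """Capture a screenshot after a step."""
    return "step.png"

def check(expected_result: str) -> bool:
    """Evaluate whether the expected result was met."""
    return True  # placeholder verdict

def run_test(steps: list[str], expected_result: str) -> dict:
    evidence = []
    for step in steps:
        perform(step)                  # execute the step
        evidence.append(screenshot())  # capture per-step evidence
    return {
        "passed": check(expected_result),
        "screenshots": evidence,
    }

report = run_test(["Open app", "Tap Login"], "Login screen is shown")
```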
Mahoraga is also what makes bounding box calibration necessary on physical devices. Because screen densities vary across hardware, Mahoraga uses a yellow overlay to show where it detects UI elements. If the overlay is misaligned, taps land in the wrong place. Calibrating the X/Y offset in the Mahoraga app fixes this.
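The correction itself is plain coordinate arithmetic. Here is a sketch that assumes the offset is applied additively to every tap point; the exact correction Mahoraga applies is not documented here:

```python
def apply_calibration(tap: tuple[int, int],
                      offset: tuple[int, int]) -> tuple[int, int]:
    """Shift a detected tap point by a calibration offset (assumed additive)."""
    x, y = tap
    dx, dy = offset
    return (x + dx, y + dy)

# If taps land 12 px right and 8 px below the target,
# an offset of (-12, -8) pulls them back on target.
corrected = apply_calibration((500, 1200), (-12, -8))
assert corrected == (488, 1192)
```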

How they work together

Megumi and Mahoraga are separate but sequential. Megumi produces the plan. Mahoraga carries it out.
  1. You write a prompt
  2. Megumi reads your context and generates test cases
  3. You review, save to library, add to a suite
  4. You trigger a run
  5. Mahoraga executes each test case on your device
  6. Quash generates an execution report

They never operate at the same time on the same task. Megumi runs when you are in Test Studio building tests. Mahoraga runs when you trigger a task or suite on a device.
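The handoff can be sketched as a strict two-stage pipeline: generation finishes before execution begins. Both function names are hypothetical stand-ins, not a real Quash API:

```python
# Hypothetical sketch of the Megumi -> Mahoraga handoff.
# generate_tests and execute are stand-ins, not a real Quash API.

def generate_tests(prompt: str, context: dict) -> list[dict]:
    """Stage 1 (Megumi): reason over context, emit structured test cases."""
    return [{"name": f"Generated from: {prompt}", "steps": ["..."]}]

def execute(test: dict) -> dict:
    """Stage 2 (Mahoraga): run one test case on the device."""
    return {"test": test["name"], "passed": True}

suite = generate_tests("checkout flow", context={"prd": "..."})
report = [execute(test) for test in suite]  # runs only after generation
```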

Why two agents instead of one

The split is intentional. Test generation and test execution are fundamentally different problems.

Generation is a reasoning problem — it requires reading documents, understanding intent, inferring edge cases, and producing structured output. Megumi is optimised for this. It is a language model purpose-built for QA test generation.

Execution is an interaction problem — it requires real-time vision, precise targeting of UI elements, handling unexpected states, and capturing evidence. Mahoraga is optimised for this. It runs on-device, directly against your app’s accessibility layer.

Combining them into one agent would mean compromising both. Keeping them separate means each one does its job at the highest level.

Supported platforms

| Agent | Android | iOS |
| --- | --- | --- |
| Megumi | ✓ Generates tests for Android | ✓ Generates tests for iOS |
| Mahoraga | ✓ Physical devices, emulators, cloud | iOS simulators only (Mac) |
Megumi can generate tests for both platforms regardless of what device you have connected. Mahoraga’s execution support depends on device type — see Devices for the full breakdown.