Documentation Index
Fetch the complete documentation index at: https://quashbugs.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Write instructions as if briefing a person
Mahoraga reads your instruction and executes it step by step, exactly as written. It does not infer what you meant — it follows what you said. Write instructions the way you would brief a new team member who has never used the app before.
Name every UI element explicitly. “Tap the Sign In button” is unambiguous. “Tap the button” is not. Use the exact label shown in the app — if the button says Continue, write “Tap the Continue button”, not “Tap Next.”
Describe steps in the order they happen. Do not skip navigation steps assuming the agent will figure them out.
Handle interruptions explicitly. If your app shows permission dialogs, onboarding tooltips, or banners that might appear during the flow, add a line: “If a notification permission dialog appears, tap Allow.”
Make expected results explicit
The Expected Result field is what determines whether a test passes or fails. Vague expected results produce unreliable verdicts.
Vague: “Error message shows.”
Explicit: “An error message reading ‘Incorrect password’ appears below the password field. The user remains on the login screen and the password field is cleared.”
The expected result should describe the final state of the app after all steps have completed — what is visible on screen, what has changed, and what should not have changed.
One scenario per test case
Each test case should test one specific scenario. Avoid combining multiple distinct flows into a single instruction.
Too broad: “Test the entire login flow including valid login, invalid password, forgot password, and account locked.”
Right size: Four separate test cases — one per scenario.
Focused test cases are easier to debug when they fail, easier to reuse across different suites, and produce cleaner reports.
Use Priority to make execution decisions
Priority is not just a label — it determines which tests run when time or resources are limited.
- Critical tests should run on every PR, on every device, before any release
- High tests should run as part of your standard regression suite
- Medium tests should run on a schedule or before major releases
- Low tests can run when you have capacity or specifically suspect the area
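The priority tiers above lend themselves to a simple selection rule: pick a cutoff tier for the occasion and run everything at or above it. A minimal sketch in Python — the test-case records and field names here are illustrative, not Quash's actual schema:

```python
# Hypothetical test-case records; field names are illustrative only.
TEST_CASES = [
    {"name": "valid login", "priority": "critical"},
    {"name": "invalid password", "priority": "high"},
    {"name": "forgot password link", "priority": "medium"},
    {"name": "legacy theme toggle", "priority": "low"},
]

# Tiers ordered from most to least urgent.
PRIORITY_ORDER = ["critical", "high", "medium", "low"]

def select_suite(cutoff_tier):
    """Return every test case at or above the given priority tier."""
    cutoff = PRIORITY_ORDER.index(cutoff_tier)
    return [t["name"] for t in TEST_CASES
            if PRIORITY_ORDER.index(t["priority"]) <= cutoff]

# On every PR, run only critical tests; before a release, widen the net.
print(select_suite("critical"))  # ['valid login']
print(select_suite("medium"))    # the first three scenarios
```

The point of the sketch is that priority becomes an input to automation, not just a label on a card.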
Tag everything
A test case with no tags is isolated — you cannot filter for it, you cannot group it without opening it, and you cannot build targeted suites around it. Tags are how you navigate at scale. Agree on a tagging convention with your team and apply it consistently to every test case, including ones generated by Recipe. Generated test cases often come with recipe-level tags — review them and add feature or journey tags as needed.
Useful tag combinations: checkout + happy-path, login + error-handling, profile + regression.
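Consistent tags make targeted suites a one-line filter. A minimal sketch, assuming the tag combinations suggested above; the records and field names are hypothetical, not a Quash schema:

```python
# Hypothetical tagged test cases using the suggested tag conventions.
TEST_CASES = [
    {"name": "checkout with saved card", "tags": {"checkout", "happy-path"}},
    {"name": "login with wrong password", "tags": {"login", "error-handling"}},
    {"name": "edit profile avatar", "tags": {"profile", "regression"}},
]

def filter_by_tags(cases, required):
    """Return the cases that carry every tag in `required`."""
    required = set(required)
    # Set subset comparison: a case matches only if it has all required tags.
    return [c["name"] for c in cases if required <= c["tags"]]

print(filter_by_tags(TEST_CASES, ["login", "error-handling"]))
# ['login with wrong password']
```

An untagged case simply never matches any filter, which is exactly the isolation the paragraph above warns about.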
Review generated test cases before relying on them
Test cases generated by Recipe are a strong starting point, but they should be reviewed before being added to a release-critical suite. Check:
- Does the instruction reflect how the UI actually works today?
- Are UI element names accurate and current?
- Is the expected result specific enough to catch a real failure?
- Is the priority assigned correctly for your team’s conventions?
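Parts of this checklist can be pre-screened mechanically before a human review. A minimal sketch of such a sanity check — the field names, vague-phrase list, and priority values are illustrative assumptions, and this does not replace reviewing the instruction against the live UI:

```python
# Illustrative pre-review lint for generated test cases.
# Real review still requires checking the instruction against the actual UI.
VAGUE_RESULTS = {"error message shows", "it works", "screen updates"}
VALID_PRIORITIES = {"critical", "high", "medium", "low"}

def review_warnings(case):
    """Flag obviously incomplete fields in a generated test case."""
    warnings = []
    if case["expected_result"].strip().lower() in VAGUE_RESULTS:
        warnings.append("expected result is too vague to verify")
    if not case.get("tags"):
        warnings.append("no tags: case cannot be filtered or grouped")
    if case.get("priority") not in VALID_PRIORITIES:
        warnings.append("priority missing or outside team convention")
    return warnings

generated = {
    "instruction": "Tap the Sign In button, enter a wrong password, tap the Continue button.",
    "expected_result": "Error message shows",
    "tags": [],
    "priority": None,
}
for w in review_warnings(generated):
    print("WARN:", w)
```

A check like this catches the mechanical gaps quickly, so human review time goes to the questions only a person can answer: whether the instruction matches today's UI and whether the element names are current.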