Why We're Expanding Beyond Mobile Testing
Every product decision we've made in the last two years has been a response to something real: a failure mode we kept seeing, a request we kept getting, a gap we couldn't ignore. The decision to expand Quash beyond mobile testing is no different.
This isn't a pivot. It's a logical extension of what Quash's execution model already makes possible. But the context matters — so here's the honest version of how we got here and what we're actually building.
What Users Have Been Asking For
When teams started using Quash seriously — not just evaluating it, but actually building QA processes on it — the feedback had a consistent shape. The core execution experience was strong. The gaps they reported weren't about Mahoraga's performance on mobile. They were about everything around it.
Backend validation kept coming up. Teams would run a UI test, see it pass, and then find out that the backend had failed silently. The test said success. The backend said otherwise. We built backend validations to close that gap — API calls that fire mid-execution via @slug syntax, validating backend state against UI state inside the same test run.
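The @slug syntax itself is Quash-specific, but the underlying idea is worth making concrete. Below is an illustrative Python sketch, not Quash's actual implementation: the step format, the lambda actions, and the order-tracking backend are all invented. The point is only that a step passes when the UI and the backend agree:

```python
# Illustrative sketch (not Quash's real API): a UI step's apparent success
# is cross-checked against backend state inside the same test run.

def run_test(steps, backend_state):
    """Run UI steps; where a step carries a validation, assert that
    the backend agrees with what the UI reported."""
    results = []
    for step in steps:
        ui_ok = step["action"]()            # simulate the UI action
        validate = step.get("validate")     # optional backend check
        backend_ok = validate(backend_state) if validate else True
        # A step only passes when UI *and* backend agree.
        results.append(ui_ok and backend_ok)
    return results

# Example: the UI says checkout succeeded, but the backend never saw the order.
backend = {"orders": []}
steps = [
    {"action": lambda: True},                           # login: UI success
    {"action": lambda: True,                            # checkout: UI success
     "validate": lambda b: len(b["orders"]) == 1},      # ...but no order exists
]
print(run_test(steps, backend))  # [True, False] — the silent failure is caught
```

Without the validation on the second step, both steps would report success, which is exactly the "test said success, backend said otherwise" failure mode described above.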
Test generation from real source documents was the other consistent ask. Teams were spending time manually writing test cases that could have been derived directly from the PRD, the design, or the codebase. Megumi — our AI test generation agent — came out of that feedback. Attach a PRD, link a Figma file, connect a GitHub repo, and Megumi generates structured, executable test cases from the actual source of truth rather than from memory.
Windows support unblocked QA teams who had been working around Quash rather than with it. A significant share of QA teams — particularly in enterprise environments and Southeast Asian markets — couldn't access local execution without a Mac. The Windows desktop app removes that barrier entirely.
Integrations came up constantly: Jira for bug tracking, Slack for failure notifications, CI/CD pipelines for release gating. We've shipped Slack notifications, full CI/CD integration (GitHub Actions, CircleCI, Vercel, Jenkins), and MCP for agentic workflow integration. Jira and Notion are on the roadmap.
Each of these filled a gap. But across all of it, the larger pattern kept surfacing: the problem wasn't that any individual tool was missing. It was that QA tooling, in aggregate, was fragmented in a way that created overhead just from managing the toolchain itself.

The Bigger Gap: Why Fragmented QA Tooling Is Its Own Problem
A typical mobile QA stack looks something like this: one tool for test case management, another for execution, a third for reporting, a fourth for API testing, manual tracking in Jira, CI/CD configuration in YAML files that someone has to maintain, and Slack for the human coordination layer that glues it together when something breaks.
Every handoff between tools is a place where context gets lost, state gets out of sync, or someone has to do manual work to make two systems agree on what happened. The execution report from the test runner doesn't link directly to the test case in the management tool. The bug created in Jira doesn't reference the execution report. The nightly CI run fails, but nobody checks the dashboard, so nobody knows until someone looks at the PR status badge and traces backward.
The overhead isn't in any single tool. It's in managing a toolchain that was never designed to work together.
Quash's answer to this has always been consolidation: one workspace where test generation, execution, management, and reporting live together, shared by the whole team. That's been the mobile story. The expansion into web testing, CI/CD-native validation, and API testing is the same story at a larger scope.
What We're Building: Web Testing, CI/CD Integration, and API Validation
Can Quash Test Web Apps as Well as Mobile?
Full web app testing is the most significant expansion on the roadmap. Mahoraga's vision-based execution engine — the same engine that executes plain English instructions on mobile apps by reading the live screen — is being extended to the browser. Describe a web flow in plain English: login, search, form submission, checkout. Mahoraga navigates the web app the same way it navigates mobile: reading the live page, adapting to dynamic content, handling navigation, capturing step-by-step evidence.
The goal is one platform for both surfaces. A team testing a mobile app that also has a web version shouldn't need Selenium, Playwright, or Cypress alongside Quash. One project, one test case library, both platforms — mobile and web — covered in the same suite and reported in the same execution report.
CI/CD-native quality gates are the other major push. Quash already integrates with GitHub Actions, CircleCI, Vercel, and Jenkins, with exit codes that gate merges on test results. What's coming is tighter integration: automatic test suite selection based on what changed in a PR, faster execution feedback designed specifically for PR-level validation, and result summaries that surface directly in PR comments without requiring anyone to open a separate dashboard.
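The merge-gating mechanism all of those CI systems share is the process exit code: a step that exits nonzero fails the pipeline stage. Here's a hypothetical sketch of that contract in Python; the function name and result format are invented for illustration:

```python
# Hypothetical sketch of a CI quality gate: map test results to the
# exit code that CI systems (GitHub Actions, Jenkins, etc.) use to
# pass or fail a pipeline stage. Names are illustrative.

def gate(results):
    """Return the exit code a CI step would use: 0 only if every
    test passed, 1 otherwise, so the pipeline blocks the merge."""
    failed = [name for name, passed in results.items() if not passed]
    for name in failed:
        print(f"FAIL: {name}")
    return 1 if failed else 0

# In a real CI step this would end with: sys.exit(gate(results))
exit_code = gate({"login_flow": True, "checkout_flow": False})
print("exit code:", exit_code)  # nonzero, so the merge is blocked
```

The planned improvements layer on top of this contract rather than replacing it: the gate still works by exit code, but the suite it runs is chosen from the PR diff and the summary lands in the PR thread.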
API testing as a first-class layer builds on the backend validation foundation. The current implementation fires API calls mid-UI-test via @slug syntax. The roadmap extends this to standalone API test suites — sequences of API calls with assertions, chained together, runnable independently or embedded in UI flows.
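As a rough sketch of what a chained standalone suite could look like, the snippet below runs API steps in order, asserts on each response, and feeds extracted values into later steps. None of this is Quash's actual syntax: the step format, the extraction keys, and the in-memory transport are all invented:

```python
# Illustrative sketch of a chained API test suite (not Quash's syntax):
# each step calls an endpoint, asserts on the response, and can save
# values that later steps interpolate into their paths.

def run_suite(steps, call):
    """Execute steps in order; `ctx` carries chained values between them."""
    ctx = {}
    for step in steps:
        resp = call(step["method"], step["path"].format(**ctx))
        assert resp["status"] == step["expect_status"], (
            f"{step['path']}: expected {step['expect_status']}, got {resp['status']}"
        )
        for key, field in step.get("extract", {}).items():
            ctx[key] = resp["body"][field]   # chain: save a field for later steps
    return ctx

# Stub transport standing in for real HTTP calls.
def fake_call(method, path):
    if (method, path) == ("POST", "/orders"):
        return {"status": 201, "body": {"id": 42}}
    if (method, path) == ("GET", "/orders/42"):
        return {"status": 200, "body": {"id": 42, "state": "confirmed"}}
    return {"status": 404, "body": {}}

suite = [
    {"method": "POST", "path": "/orders", "expect_status": 201,
     "extract": {"order_id": "id"}},
    {"method": "GET", "path": "/orders/{order_id}", "expect_status": 200},
]
print(run_suite(suite, fake_call))  # {'order_id': 42}
```

The same sequence could run standalone, as above, or be triggered mid-UI-flow, which is the embedding the roadmap describes.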
The Key Technical Challenge: Why Web Testing Is Not Just Flipping a Switch
The extension of Mahoraga to web isn't trivial, and it's worth being specific about why.
Mobile execution has specific properties that Mahoraga was built around: a sandboxed application environment, a well-defined accessibility API (Android's UIAutomator / accessibility tree), and a relatively predictable rendering model. Mahoraga reads the accessibility tree on mobile the way a screen reader does — it has structured, semantic information about what's on screen.
The web is different. DOM structure varies dramatically across frameworks. Single-page apps render and re-render asynchronously. Shadow DOM components hide internal structure. Iframes break the document model. A page that looks simple from the user's perspective may have a deeply nested, dynamically generated DOM structure underneath.
Extending Mahoraga to web required rethinking the screen reading layer for an environment where semantic structure is less reliable than on mobile. The vision-based approach — reading the rendered page visually rather than relying solely on DOM structure — is the mechanism that bridges this. Mahoraga identifies UI elements by what they look like and where they are on the rendered page, supplemented by DOM information where it's available and reliable.
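That fallback strategy can be sketched as a two-pass lookup. Everything below is illustrative: the element shapes are invented, and real visual matching operates on rendered pixels rather than pre-extracted text, but the ordering captures the idea of trusting semantic structure when it exists and falling back to vision when it doesn't:

```python
# Hedged sketch of semantic-first, vision-fallback element resolution.
# Element dicts are invented for illustration: `dom_label` stands in for
# reliable DOM/accessibility info (None when, say, shadow DOM hides it),
# and `visual_text` for what a vision model reads off the rendered page.

def resolve(target, elements):
    """Pick the element matching a plain English target, and report
    which layer resolved it."""
    # 1) Semantic pass: a reliable DOM/accessibility label wins outright.
    for el in elements:
        if el.get("dom_label") == target:
            return el, "dom"
    # 2) Visual pass: match on what the element looks like on screen.
    for el in elements:
        if target.lower() in el["visual_text"].lower():
            return el, "vision"
    return None, "unresolved"

elements = [
    {"dom_label": None, "visual_text": "Add to Cart"},        # label hidden
    {"dom_label": "checkout-btn", "visual_text": "Checkout"}, # label exposed
]
print(resolve("Add to Cart", elements)[1])   # vision
print(resolve("checkout-btn", elements)[1])  # dom
```

The "Add to Cart" button resolves visually because its semantic label is missing, which is exactly the shadow DOM case described above; the checkout button resolves semantically because its label is available and reliable.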
This is why "web testing" isn't just flipping a switch. The execution model is fundamentally the same — plain English instructions, intent-based execution, no selectors — but the underlying screen reading layer had to be rebuilt for a different execution environment.
The Long-Term Vision: End-to-End STLC Automation
The Software Testing Life Cycle, as most QA teams experience it, involves a sequence of steps that today require multiple tools, multiple handoffs, and significant manual coordination: requirements analysis, test planning, test case design, environment setup, execution, defect tracking, and reporting.
The long-term vision for Quash is to automate as much of that sequence as possible — not replacing human QA judgment, but eliminating the mechanical work that gets in the way of it. Requirements come in as a PRD. Megumi generates test cases. Mahoraga executes them. Reports are generated automatically. Failures surface in the tools where the team already works. CI gates on the results.
The interesting work in QA — deciding what to test, interpreting ambiguous failures, understanding what a regression means for a release — stays human. The mechanical work — writing scripts, maintaining selectors, formatting reports, coordinating between tools — increasingly doesn't have to be.
We're not there yet. Web testing is the next step. CI/CD-native quality gates are the step after. API testing as a first-class layer is the step after that. The direction is clear, and we'd rather state the timeline plainly than overpromise. What's available today is a platform that already closes more of this gap than most mobile QA teams can find anywhere else.
Frequently Asked Questions
Does Quash support web app testing? Web app testing is on the roadmap. Mahoraga's vision-based execution engine is being extended to the browser — the same plain English instruction format, the same execution model, applied to web flows. When it ships, teams will be able to run mobile and web tests from the same Quash workspace. Current support covers Android emulators and iOS simulators.
What is the difference between mobile testing and web testing technically? Mobile testing runs against native app code in a sandboxed environment, using platform accessibility APIs to read screen state. Web testing runs against a browser rendering a web application, with DOM structure, dynamic rendering, and framework variability as the main technical challenges. Mahoraga's approach to both is vision-based — reading the rendered screen rather than relying solely on code structure.
What integrations does Quash currently support? Quash integrates with GitHub Actions, CircleCI, Vercel, and Jenkins for CI/CD. Slack for failure notifications. GitHub, GitLab, and Bitbucket for codebase-linked test generation. MCP for agentic workflow integration with Claude, Cursor, and other AI tools. Jira and Notion integrations are on the roadmap.
What is end-to-end STLC automation? STLC (Software Testing Life Cycle) automation means automating the sequence of steps from test case generation through execution, reporting, and defect tracking. Quash's long-term vision is to automate as much of this cycle as possible: requirements in, test suite out, execution automated, results surfacing in the tools the team already uses.
Will Quash replace Selenium or Playwright for web testing? The goal is to offer an alternative for teams that don't want to write Playwright scripts or maintain Selenium selectors. Teams with existing Playwright or Selenium suites can continue using them — Quash's web testing will complement them or replace them depending on what the team prefers.




