
By mahima · 8 min read

What to Automate First: A Decision Framework for New QA Teams

Every new QA team hits the same wall.

You have a growing set of test cases. The team agrees automation is the next step. Someone sets up a framework. And then comes the question that actually determines whether this effort succeeds or fails:

What do we automate first?

Most teams don’t get this wrong because they lack skill. They get it wrong because they choose based on intuition instead of a system.

They automate what’s visible. What’s urgent. What someone asked for last week.

Three months later, the suite is fragile, maintenance is eating into every sprint, and trust in automation starts to drop.

The problem isn’t automation. It’s the order in which you automate.

This guide gives you a practical decision framework to choose the right tests first, so your automation effort compounds instead of collapsing.

The mistake most teams make

When teams begin automation, they often start with UI flows.

Login screens. Dashboards. Forms. End-to-end journeys.

It feels logical. These are the flows users interact with. These are the ones stakeholders care about.

But in practice, UI tests are:

  • slower to run

  • more sensitive to UI changes

  • more expensive to maintain

A button label changes, a layout shifts, or a selector breaks — and suddenly your tests fail even though the product still works.

This is why most testing strategies follow a layered approach:

  • Unit tests → fast, stable, cheap

  • Integration tests → moderate cost, moderate stability

  • End-to-end tests → slowest, most fragile

New teams often unintentionally build the opposite — a top-heavy suite dominated by UI tests.

The result: Slow pipelines. Frequent failures. Low trust.

The framework below helps you avoid that.


The decision framework: four questions

Before automating any test case, run it through these four filters.

These aren’t rigid rules — they’re practical lenses that help you prioritize correctly.

1. How often will this test run?

Automation pays off when tests run frequently.

  • Runs on every PR → strong candidate

  • Runs daily → strong candidate

  • Runs once a month → usually not worth it

Example:

  • Login flow used in every release → automate

  • Quarterly admin export check → keep manual

Rule of thumb: If a test runs rarely, automation effort won’t amortize.
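The amortization point is easy to make concrete with a little arithmetic. The numbers below are illustrative assumptions, not benchmarks: automation pays off once the manual time saved across runs exceeds the cost of writing and upkeeping the test.

```python
import math

def breakeven_runs(build_hours, manual_minutes_per_run, maintenance_minutes_per_run=0):
    """Number of runs before an automated test pays for itself."""
    saved_per_run = manual_minutes_per_run - maintenance_minutes_per_run
    if saved_per_run <= 0:
        return None  # upkeep eats all the savings; automation never pays off
    return math.ceil(build_hours * 60 / saved_per_run)

# Illustrative: a login test that takes 4 hours to automate,
# saves 10 minutes of manual checking per run, costs ~2 min upkeep per run.
print(breakeven_runs(build_hours=4, manual_minutes_per_run=10,
                     maintenance_minutes_per_run=2))  # → 30 runs
```

A test run on every PR clears 30 runs within days; a quarterly check takes seven years.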

2. Is the feature stable?

Automating unstable features creates churn.

If the UI, logic, or flow is still evolving, your test will break every sprint — not because the product is wrong, but because it’s changing.

Example:

  • Newly redesigned onboarding flow → wait

  • Mature authentication system → automate

Practical heuristic: If a feature has gone a couple of sprints without major changes, it’s usually stable enough.

3. What happens if this test misses a bug?

This is your business impact filter.

Not all failures are equal.

  • Login failure → blocks all users

  • Payment failure → revenue impact

  • Minor settings bug → limited impact

Example:

  • Payment checkout flow → automate early

  • Profile theme toggle → can wait

High impact = high priority, even if the test is complex.

4. What will it cost to maintain?

Some tests are stable once written. Others break every time UI shifts.

Before automating, ask:

“If this screen changes slightly, does the test survive?”

Example:

  • API contract validation → stable

  • Pixel-sensitive UI flow → fragile

This is where modern tools can help.

Instead of relying purely on brittle selectors, platforms like Quash allow tests to be defined at an intent level, so they can adapt to UI changes more gracefully.

But even with better tooling, maintenance cost should always be part of the decision.

The decision table (use this in practice)

Here’s a simple way to evaluate any test:

| Factor | High Score (Good for Automation) | Low Score (Avoid for Now) |
| --- | --- | --- |
| Frequency | Runs on every PR / daily | Rare or release-only |
| Stability | Feature is stable | Still evolving |
| Impact | Breaks critical flows | Minor inconvenience |
| Maintenance | Resilient to change | Breaks easily |

Automate only when most signals are positive.
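One way to put the table into practice is a small score card. The 1–5 scales, weights, and threshold below are arbitrary assumptions for illustration; tune them to your team.

```python
from dataclasses import dataclass

@dataclass
class TestCandidate:
    name: str
    frequency: int    # 1 = release-only … 5 = runs on every PR
    stability: int    # 1 = still evolving … 5 = mature feature
    impact: int       # 1 = cosmetic … 5 = blocks all users / revenue
    maintenance: int  # 1 = breaks easily … 5 = resilient to change

    def score(self) -> int:
        return self.frequency + self.stability + self.impact + self.maintenance

    def should_automate(self, threshold: int = 14) -> bool:
        # "Most signals positive" ≈ an average of 3.5 per factor.
        return self.score() >= threshold

login = TestCandidate("login flow", frequency=5, stability=5, impact=5, maintenance=4)
theme = TestCandidate("profile theme toggle", frequency=2, stability=4, impact=1, maintenance=3)

print(login.should_automate())  # True
print(theme.should_automate())  # False
```

The point is not the exact arithmetic; it is forcing every candidate through the same four questions before anyone writes a line of test code.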

What to automate first (priority order)

Once you evaluate your tests, the prioritization becomes clear.

1. Smoke tests (start here)

This is your highest-leverage investment.

A small set of tests (10–15) that answer:

  • Can users log in?

  • Does the core flow complete?

  • Does the app launch without crashing?

These run on every build and quickly tell you if the build is usable.

Example:

  • Login → Home screen → Core action → Success state

If this fails, nothing else matters.
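A smoke check can be as plain as a handful of assertions over the core flow. The `AppClient` below is a hypothetical stub standing in for whatever driver your stack uses (Appium, Espresso, an HTTP client); the shape of the checks, not the stub, is the point.

```python
class AppClient:
    """Hypothetical stand-in for a real driver (Appium, HTTP client, etc.)."""
    def launch(self):
        return True  # app starts without crashing
    def login(self, user, password):
        return {"screen": "home", "user": user}
    def perform_core_action(self, session):
        return {"status": "success"}

def test_smoke_core_flow():
    app = AppClient()
    assert app.launch(), "app failed to launch"
    session = app.login("demo@example.com", "secret")
    assert session["screen"] == "home", "login did not reach home screen"
    result = app.perform_core_action(session)
    assert result["status"] == "success", "core action did not complete"

test_smoke_core_flow()
print("smoke suite passed")
```

Ten to fifteen functions of this shape, wired into every build, are the whole first milestone.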

2. Regression tests for production bugs

Every bug that reaches production reveals a gap.

When you fix a bug, add a test for it.

Over time, your production issues become your strongest regression suite.

Example:

  • A crash in checkout last month → becomes a permanent test

This builds coverage based on real risk, not assumptions.
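"Every fixed bug becomes a test" works best with a convention that ties the test to the incident. The checkout helper and incident ID below are hypothetical; the pattern is pinning the exact failing input from production.

```python
def apply_discount(total_cents: int, discount_pct: int) -> int:
    """Checkout helper. The (hypothetical) original bug: a 100% coupon
    drove the total below zero; the fix clamps the result at zero."""
    discounted = total_cents - round(total_cents * discount_pct / 100)
    return max(discounted, 0)

def test_bug_1234_full_discount_no_negative_total():
    # Regression test for production incident #1234 (hypothetical ID):
    # checkout crashed when a 100% coupon produced a negative total.
    assert apply_discount(total_cents=999, discount_pct=100) == 0

test_bug_1234_full_discount_no_negative_total()
```

Naming the test after the incident means a future failure points straight back to the original bug report.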

3. High-frequency, high-impact flows

Now expand using the framework.

Typical candidates:

  • Authentication flows

  • Core user journeys

  • Payment/subscription flows

  • Critical backend validations

These run often, matter a lot, and are usually stable.

What should stay manual (for now)

Automation is not the goal. Effective testing is.

Keep these manual:

  • Exploratory testing on new features

  • Flows still under active development

  • Rare edge cases

  • UX and visual judgment scenarios

Manual testing is essential where adaptability and human intuition matter.

A practical starting point: your first 10 tests

Most teams struggle because they start too big.

Start small. Be deliberate.

Process:

  1. List all tests run before release

  2. Filter using the four-question framework

  3. Rank by business impact

  4. Pick the top 10

Run these on every PR for a few weeks.

Only expand when:

  • failures are meaningful

  • flakiness is low

  • the team trusts the results

A small trusted suite beats a large ignored one.

The trap that kills automation efforts

The biggest mistake is not tool choice.

It’s skipping stabilization.

Teams often:

  • write 10 tests

  • immediately add 40 more

  • ignore flaky failures

Soon:

  • pass rates drop

  • failures become noise

  • trust disappears

Better approach:

  • build in batches

  • stabilize before expanding

  • fix flakiness early

Progress feels slower, but results compound.
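"Stabilize before expanding" can be an explicit gate rather than a vibe: only grow the suite once the recent pass rate clears a bar. The 95% threshold and 20-run window below are assumptions to tune, not standards.

```python
def ready_to_expand(recent_runs: list[bool],
                    min_runs: int = 20,
                    min_pass_rate: float = 0.95) -> bool:
    """Gate suite growth on the stability of what already exists.
    recent_runs: pass/fail outcomes of the existing suite, newest last."""
    if len(recent_runs) < min_runs:
        return False  # not enough signal yet
    pass_rate = sum(recent_runs) / len(recent_runs)
    return pass_rate >= min_pass_rate

stable = [True] * 19 + [False]   # 95% over 20 runs
flaky = [True, False] * 10       # 50% over 20 runs
print(ready_to_expand(stable))   # True
print(ready_to_expand(flaky))    # False
```

A gate like this makes the "stabilize first" rule visible in CI instead of living in someone's head.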

Applying this to mobile testing

The same framework applies to mobile, but with added complexity.

Device fragmentation

A test stable on one device may fail on others.

When evaluating stability, consider:

  • OS versions

  • device types

  • screen variations

Higher maintenance pressure

Mobile UI changes frequently, and traditional locator-based approaches can break more often.

This is why mobile teams increasingly move toward:

  • intent-based automation

  • adaptive execution systems

  • reduced reliance on brittle selectors

Tools like Quash approach this by executing tests based on user intent rather than fixed scripts, allowing tests to adapt to UI changes across Android and iOS.

The key shift is this:

From scripting steps → to defining outcomes.
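The shift can be illustrated generically. This is not Quash's actual API, just a sketch of the two styles: a step script pins selectors, while an outcome-level definition states what must be true at the end and leaves the path to an adaptive runner.

```python
# Style 1: scripted steps — couples the test to selectors.
scripted = [
    ("tap", "#btn-login-v2"),    # breaks if the id changes
    ("type", "#email-input", "demo@example.com"),
    ("tap", "#submit"),
]

# Style 2: declared outcome — an intent-level description an adaptive
# runner can satisfy even if the screens change. Hypothetical schema.
intent = {
    "goal": "user is logged in and sees the home screen",
    "given": {"account": "demo@example.com"},
    "verify": ["session is active", "home screen visible"],
}

# The test's contract is the outcome, not the click path.
print(len(scripted), "steps vs", len(intent["verify"]), "outcomes")
```

When the UI changes, style 1 needs its selectors rewritten; style 2 stays valid as long as the outcome still holds.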

The short version

If you want one clear answer:

Start with your smoke tests.

  • 10–15 core flows

  • run on every build

  • stabilize them fully

Then expand carefully using the framework.

That’s how automation becomes an asset instead of overhead.

Also read

  • Regression Testing: The Complete Guide (2026)

  • How to Switch from Manual to Automated Testing

  • Creating Regression Test Suites in Agile Teams

  • CI/CD Pipelines in AI-Powered Test Automation

Frequently Asked Questions

What should new QA teams automate first?

Start with smoke tests covering critical flows like login and core functionality. Then add regression tests for production bugs and expand into high-frequency, high-impact scenarios.

How do you decide what to automate?

Evaluate each test based on frequency, stability, business impact, and maintenance cost. Automate tests that score well across all four.

What tests should not be automated?

Exploratory testing, unstable features, rare edge cases, and scenarios requiring human judgment should remain manual.

How many tests should you start with?

Around 10 is a practical starting point. The goal is not the number, but building a stable, trusted foundation.

Why do automation efforts fail early?

Common reasons include automating unstable features, expanding too quickly, and not addressing flaky tests before scaling.


Building mobile automation but not sure where to start?

See how Quash helps teams generate and execute tests without building fragile frameworks from scratch → How Quash works