Automation has become a core part of modern quality assurance and R&D efficiency. However, many teams struggle with low defect detection, high maintenance costs, misaligned technologies, and difficulty proving real business value.
After years of practice and iteration, the QJIAYI Quality Team has built a mature automation system that now runs 10,000+ UI and API test cases in continuous regression and detects more than 80 bugs per month. This article shares how we transformed automation from a costly experiment into a stable, business-driven capability.
Many teams face similar pain points when building automation systems:
High volumes of automated test cases, but few real defects found.
Frequent business changes lead to heavy script maintenance.
New frameworks and tools often fail to adapt to real business scenarios.
Stakeholders doubt the value of automation and are reluctant to invest.
Long-term investment fails to deliver the expected return.
Teams often jump between new tools and frameworks without solving fundamental problems, creating a cycle of inefficiency. At QJIAYI, we broke this cycle by aligning automation closely with business goals and team workflows.
Rather than chasing universal metrics such as code coverage, pass rate, or CI compliance, we believe automation goals must match the business lifecycle.
In early-stage products, over-emphasizing high test coverage can slow iteration and hurt customer experience.
Neglecting automation during rapid growth leads to online failures, regressions, and damaged brand reputation.
We structured our automation roadmap into three progressive stages:
In the first stage, we built foundational frameworks including:
Apollo API Automation Framework
Hades UI Automation Framework
Data Regression Platform
The goal was to replace repetitive manual tests with stable script execution and improve case writing efficiency.
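To illustrate the kind of check such frameworks standardize, here is a minimal response-validation sketch. The field names and helper are hypothetical, not Apollo's actual API; the point is that one reusable validator replaces many hand-checked assertions:

```python
# Hedged sketch: a reusable schema check an API regression case might run.
# Field names and the sample response are illustrative, not real QJIAYI data.

def validate_response(resp: dict, required_fields: dict) -> list:
    """Return a list of problems: missing fields or wrong types."""
    problems = []
    for field, expected_type in required_fields.items():
        if field not in resp:
            problems.append(f"missing field: {field}")
        elif not isinstance(resp[field], expected_type):
            problems.append(f"wrong type for {field}: {type(resp[field]).__name__}")
    return problems

# Example regression check against a captured response
expected = {"order_id": str, "status": str, "amount": (int, float)}
resp = {"order_id": "A123", "status": "paid", "amount": 9.9}
assert validate_response(resp, expected) == []
```

Centralizing checks like this is what makes case writing faster: new cases supply only data, not validation logic.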
In the second stage, we developed a unified automation platform for:
Centralized test case management
Standardized execution scheduling
Automated reporting and metric analysis
This platform allowed more team members to participate and improved overall efficiency.
In the third stage, test engineers took ownership to optimize systems, expand scenarios, and innovate solutions:
Scenario-based and data-driven testing
Image comparison, JSON validation, and data consistency checks
Platform-based testing for internal packages
This stage turned automation from a tool into a team-driven capability.
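The scenario-based, data-driven pattern above can be sketched as one check function driven by a table of cases. The case data and the system-under-test stand-in are illustrative assumptions, not real QJIAYI logic:

```python
# Sketch of data-driven execution: one check function, many data rows.
# CASES and compute_total are illustrative stand-ins for real test data
# and the system under test.

CASES = [
    {"input": {"qty": 2, "price": 5.0}, "expected_total": 10.0},
    {"input": {"qty": 0, "price": 5.0}, "expected_total": 0.0},
    {"input": {"qty": 3, "price": 1.5}, "expected_total": 4.5},
]

def compute_total(row: dict) -> float:
    # Stand-in for the system under test
    return row["qty"] * row["price"]

def run_cases(cases):
    """Run every data row; return (index, actual) pairs for failures."""
    failures = []
    for i, case in enumerate(cases):
        got = compute_total(case["input"])
        if abs(got - case["expected_total"]) > 1e-9:
            failures.append((i, got))
    return failures

assert run_cases(CASES) == []
```

Because logic and data are separated, expanding coverage means appending rows, which is exactly what lets more team members participate.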
One-size-fits-all automation rarely works in complex systems. At QJIAYI, our business spans design tools, merchant backends, open platforms, mini-programs, and international services. We use different automation strategies for different technical architectures, including backend services, open APIs, front-end components, and plugins.
We once built an end-to-end automation system covering design, generation, and data production. While it detected real issues, we faced:
Unstable data and inconsistent IDs
High comparison noise and maintenance costs
Difficulty generalizing front-end interactions
High cross-team collaboration costs
This experience taught us to:
Strengthen front-end data validation
Conduct in-depth feasibility research before implementation
Consider both technical and organizational challenges
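One concrete way to cut comparison noise from unstable data and inconsistent IDs is to mask volatile fields before diffing two payloads. A minimal sketch, with hypothetical field names rather than our actual implementation:

```python
# Sketch: mask volatile fields (IDs, timestamps) before comparing payloads,
# so regression diffs ignore expected churn. Field names are hypothetical.

VOLATILE_FIELDS = {"id", "trace_id", "created_at", "updated_at"}

def normalize(obj):
    """Recursively replace volatile fields with a fixed placeholder."""
    if isinstance(obj, dict):
        return {k: ("<masked>" if k in VOLATILE_FIELDS else normalize(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [normalize(v) for v in obj]
    return obj

# Two captures of the "same" payload differ only in volatile fields
a = {"id": "x1", "name": "design-a", "meta": {"created_at": "2024-01-01"}}
b = {"id": "y2", "name": "design-a", "meta": {"created_at": "2024-06-01"}}
assert normalize(a) == normalize(b)
```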
In the early stages, our metrics looked good (high coverage, high pass rate), yet stakeholders still questioned automation's value. We took targeted actions:
We reviewed every missed bug to identify gaps in validation, scenario design, and false positives. This made automation more targeted.
Regular reviews improved stability, reduced redundancy, and standardized development practices.
For high-risk businesses, automation became part of project goals from the start. We worked with developers to simplify data construction and improve testability.
We tracked frequent failures, fixed environmental issues, and reduced non-bug CI blockages. This made automation trusted and efficient.
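Failure tracking of this kind can start very simply: separate tests with mixed recent results (likely environmental flakiness) from tests that fail consistently (likely real bugs). The thresholds and history source below are illustrative assumptions:

```python
# Sketch: classify recent run history to triage flaky tests separately
# from consistent failures. History shape and threshold are assumptions.

def classify(history, min_runs=5):
    """history maps test name -> recent pass/fail booleans (True = pass)."""
    flaky, failing = [], []
    for name, runs in history.items():
        if len(runs) < min_runs or all(runs):
            continue  # too little data, or consistently passing
        if any(runs):
            flaky.append(name)    # mixed results -> investigate environment
        else:
            failing.append(name)  # consistent failure -> likely real bug
    return flaky, failing

history = {
    "test_checkout": [True, False, True, True, False],
    "test_login":    [False, False, False, False, False],
    "test_search":   [True, True, True, True, True],
}
assert classify(history) == (["test_checkout"], ["test_login"])
```

Routing the two buckets differently is what keeps non-bug failures from blocking CI while real regressions still do.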
Once automation was stable, we amplified its impact:
Recognized and rewarded teams for automation innovation
Used major tech projects to test and improve automation capabilities
Encouraged internal sharing of practical methods
Connected automation failures directly to bug tickets with logs and quick re-run functions
These steps turned automation into a trustworthy, indispensable part of the development pipeline.
Building automation is easy. Building a sustainable, business-aligned automation system is difficult. Our key takeaways:
Match automation goals to business stages, not just KPIs.
Collaborate closely with product and R&D teams.
Build your automation platform like a real product, with tooling, training, and operating mechanisms.
Learn from failures and iterate continuously.
When you focus on making automation stable, effective, and business-oriented, your team and partners will follow.