At a time when software R&D iterations are accelerating, the testing process often falls into a triple dilemma: "difficult to guarantee quality, difficult to improve efficiency, and difficult to control costs."
By building a BDD (Behavior-Driven Development)-driven HTA (HeadlessTA) automated testing system, the Trip.com (Ctrip) technical team addressed the pain points of both manual testing and traditional automated testing. This initiative achieved multi-dimensional breakthroughs in test coverage, R&D efficiency, and cost control, providing a reusable paradigm for testing transformation in complex business scenarios.
Trip.com’s early testing system was dominated by manual efforts. As business complexity increased, two core pain points became apparent.
Manual testing relies heavily on the tester's experience.
Missed Scenarios: Boundary scenarios and abnormal paths are often overlooked.
Ambiguity: Use cases are often stored in "memory" or written with fuzzy descriptions, leading to misunderstandings between R&D and QA.
Example: A requirement to "echo back traveler information" specified neither which fields to display nor how the display should be triggered.
Result: High communication costs and high error rates.
After introducing Jest-based HTA automated testing to alleviate manual pressure, new problems arose:
High Maintenance: HTA use cases had to be written manually, at a rate of only 2-3 cases per hour, with testing consuming 25% of total R&D costs.
Limited Logic Coverage: Manual scripting struggled with complex branch logic, resulting in only 50% code line and branch coverage—far below the quality goals of 90% and 70%, respectively.
With the business entering a "high-frequency iteration + multi-scenario adaptation" stage, a solution that offered standardized use cases, automated generation, and low maintenance costs was urgently needed.
The team built a four-layer automated testing system centered on "standardized use cases, automatic generation, efficient management, and visual debugging."
Goal: Eliminate ambiguity and unify standards.
The core value of BDD is transforming vague business requirements into precise, executable test steps.
Comparison: Traditional vs. BDD
❌ Traditional Use Case:
"Enter the ticket filling page, select 1 adult and 1 child, and the crowd title is expected to be displayed."
(Vague steps, undefined operation entries, unclear text verification).
✅ BDD Use Case:
"Click [Add number 01] under [SKU module] (add one adult) → click [Add number 02] under [SKU module] (add one child) → expect [Passenger module] to display 'One adult ticket needs to be selected'."
By using a structure of "Module Positioning + Operation Description + Expected Results," understanding bias is eliminated. The team defined 8 core BDD syntax types covering all scenarios, ensuring use cases can be parsed by machines.
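As an illustration of what "parsable by machines" can mean here, the sketch below shows one possible structured representation of the BDD use case above; the field names are illustrative assumptions, not the team's published syntax.

```javascript
// Hypothetical machine-readable form of one BDD step
// (field names are illustrative; the article does not publish the real schema).
const parsedStep = {
  module: "SKU module",                               // Module Positioning -> maps to a testID
  action: { type: "click", target: "Add number 01" }, // Operation Description
  expectation: {                                      // Expected Results
    module: "Passenger module",
    text: "One adult ticket needs to be selected"
  }
};
```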
Goal: From "manual coding" to "use-case-to-code" generation.
This is the core engine of efficiency. The team achieved zero manual writing of test code through a three-step process:
Environment Preparation: Configure the Jest environment and hta.config.js to specify project types, page entrances, and templates (e.g., specific templates for "Attraction Details" with mock data injection).
Use Case Analysis: A parsing engine converts BDD natural language into machine instructions.
Input: "Click [Select/Add] under [Pedestrian Module]"
Parsed: "Locate element with testID 'Pedestrian Module' and trigger click event."
Code Generation: Combining templates and parsed results to generate executable Jest code.
Example Generated Code:
```javascript
it("traveler selection logic", async () => {
  // screen and fireEvent come from the project's Testing Library setup;
  // expectAttrsExistsAsync is the team's custom assertion helper.
  // Verify the "add adult" control exists under the SKU module
  await expectAttrsExistsAsync(screen, "SKU module", {
    children: ["Add number 01"]
  });
  // Simulate a click to add one adult (illustrative lookup by visible text)
  const addAdultBtn = screen.getByText("Add number 01");
  fireEvent.click(addAdultBtn);
  // Verify the expected prompt in the Passenger module
  await expectAttrsExistsAsync(screen, "Passenger module", {
    children: ["One adult ticket needs to be selected"]
  });
});
```
Result: Writing efficiency improved from 2-3 cases per hour to minute-level generation.
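To make the "Use Case Analysis" step more concrete, here is a minimal sketch of how a click-type BDD sentence might be mapped to a machine instruction; the regular expression and output shape are assumptions, not the team's actual parsing engine.

```javascript
// Minimal, assumed sketch of a BDD-step parser: it turns
// "Click [Add number 01] under [SKU module]" into a click instruction.
function parseClickStep(sentence) {
  const match = sentence.match(/Click \[(.+?)\] under \[(.+?)\]/);
  if (!match) return null; // not a click-type step
  const [, target, module] = match;
  return {
    type: "click",
    testID: module, // "Locate element with testID '<module>'"
    target          // the child element to click inside that module
  };
}

// parseClickStep("Click [Add number 01] under [SKU module]")
// => { type: "click", testID: "SKU module", target: "Add number 01" }
```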
Goal: From "dispersed storage" to "platform control."
Traditional local storage of Mock data was dispersed and hard to reuse. The team introduced a Mock Use Case Platform:
Persistent Storage: Git-based storage supporting multi-person collaboration via a visual interface.
Real-time Preview: Testers can append mockId=xxx to page links to verify data validity before testing.
NPM Package Release: Data is automatically packaged into NPM modules via pipelines, creating a closed loop of "Update → Release → Call," increasing HTA run speed by 30%.
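As a rough illustration of the "closed loop" above, the sketch below shows how a page might pick up a mockId from its URL and load the corresponding mock use case from a published NPM package; the package name, data shape, and render entry point are all assumptions, not Trip.com's actual implementation.

```javascript
// Hypothetical preview flow: mockId=xxx in the page URL selects one mock use case.
// The package name, data shape, and renderPageWithMock are illustrative assumptions.
const params = new URLSearchParams(window.location.search);
const mockId = params.get("mockId");

if (mockId) {
  // Assume mock use cases are published as a keyed JSON map in an internal NPM package.
  const mockCases = require("@trip/hta-mock-cases"); // hypothetical package name
  const mockData = mockCases[mockId];
  if (mockData) {
    renderPageWithMock(mockData); // hypothetical entry point that renders the page with mock data
  }
}
```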
Goal: Lower the barrier to use and enable accurate troubleshooting.
To solve the difficulty of debugging in headless environments, the "Test Case Platform" provides:
Step Result Mapping: One-to-one mapping between HTA results and BDD steps (Green = Success, Red = Failure).
Code Preview: Click a step to view the generated code.
Precise Error Prompts: Failures display specific reasons (e.g., "Text 'xxx' not found under current testID"), helping distinguish between description errors and business bugs.
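To illustrate the step-result mapping described above, here is a minimal, assumed sketch of how an HTA runner could tie each assertion back to its BDD step and surface a precise failure reason; it is not the platform's actual code.

```javascript
// Hypothetical step-result mapping: each assertion is tied to its BDD step,
// so a failure shows up as a red step with a specific reason.
async function runStep(stepDescription, assertion) {
  try {
    await assertion();
    return { step: stepDescription, status: "green" };   // success -> green
  } catch (err) {
    return {
      step: stepDescription,
      status: "red",                                      // failure -> red
      reason: err.message // e.g. "Text 'xxx' not found under current testID"
    };
  }
}
```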
Data from the "Attraction Details" business scenario demonstrates the transformation's value:
Code Line Coverage: Increased from 71% → 91%.
Branch Coverage: Increased from 60% → 74%.
Risk Reduction: Potential online risks reduced by 60%.
Optimization: While the number of use case steps increased (823 → 1,852) for finer granularity, the number of use case files dropped (500 → 214), reducing maintenance costs by 40%.
Project Cycle: Reduced from 15.5 days to 14 days.
Release Efficiency: Increased by 10%.
Manpower Structure: The developer-to-tester ratio improved from 4.4:1 to 7:1, and testing manpower requirements dropped by 30%, freeing resources for core business development.
Test Cost Ratio: Dropped from 25% to 5%.
Example: For a 10-person-day project, testing cost dropped from 2.5 days to 0.5 days.
Maintenance: Mock data reuse rate increased by 50%.
Additionally, an automated quality gate ("Automated Quality Access Control") was implemented in GitLab CI/CD, blocking merges when coverage falls below the standards (line coverage > 90%, branch coverage > 70%).
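For reference, the same thresholds can be enforced with Jest's built-in coverageThreshold option, which fails the test run (and thus the CI job) when coverage drops below the configured values; the snippet below shows only this standard Jest setting, not Trip.com's actual pipeline configuration.

```javascript
// jest.config.js — standard Jest option: the run fails when coverage
// drops below these thresholds, so the CI job (and the merge) is blocked.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 90,     // line coverage must reach 90%
      branches: 70   // branch coverage must reach 70%
    }
  }
};
```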
While current pain points are resolved, the team is exploring deeper automation:
Mock Data Auto-Collection: Integrating with user behavior recording platforms to automatically generate Mock use cases from logs, eliminating manual data entry.
Pipeline Standardization: Integrating visual debugging into standard pipelines to support batch running and remove local environment dependencies.
The Trip.com BDD-driven HTA practice demonstrates that automated testing transformation is not just "tech stacking," but a system reconstruction centered on "Business Behavior."
By using BDD to standardize requirements and automation tools to close the loop (Use Case → Code → Test → Verification), the team achieved the ultimate goal of "Guaranteed Quality, Improved Efficiency, and Controlled Costs." This shift from "Human-Driven" to "Behavior-Driven" paves the way for moving from passive testing to active quality assurance.
Source: TesterHome Community