
Breaking the Automation Deadlock: How Trip.com Boosted Efficiency with a BDD-driven HTA Architecture

Discover how Trip.com revolutionized its QA process using a BDD-driven HTA architecture. Learn how they achieved 91% code coverage, reduced testing costs by 80%, and automated test code generation.

At a time when software R&D iterations are accelerating, the testing process often faces a triple dilemma: quality is hard to guarantee, efficiency is hard to improve, and costs are hard to control.

By building a "BDD (Behavior-Driven Development) driven HTA (HeadlessTA)" automated testing system, the Trip.com (Ctrip) technical team resolved the pain points of both manual testing and traditional automated testing. The initiative delivered multi-dimensional breakthroughs in test coverage, R&D efficiency, and cost control, providing a reusable paradigm for testing transformation in complex business scenarios.

Transformation Background: From Manual Testing to the HTA Dilemma

Trip.com’s early testing system was dominated by manual efforts. As business complexity increased, two core pain points became apparent.

1. Low Coverage and Chaotic Use Case Management

Manual testing relies heavily on the tester's experience.

  • Missed Scenarios: Boundary scenarios and abnormal paths are often overlooked.

  • Ambiguity: Use cases are often stored in "memory" or written with fuzzy descriptions, leading to misunderstandings between R&D and QA.

    • Example: A requirement such as "echo back traveler information" does not define which fields to display or how the display is triggered.

  • Result: High communication costs and high error rates.

2. The Bottleneck of Traditional Automation

After introducing Jest-based HTA automated testing to alleviate manual pressure, new problems arose:

  • High Maintenance: HTA use cases had to be written by hand, at a rate of only 2-3 cases per hour, consuming 25% of total R&D costs.

  • Limited Logic Coverage: Manual scripting struggled with complex branch logic, resulting in only 50% code line and branch coverage—far below the quality goals of 90% and 70%, respectively.

With the business entering a "high-frequency iteration + multi-scenario adaptation" stage, a solution that offered standardized use cases, automated generation, and low maintenance costs was urgently needed.

Core Solution: The Four-Layer BDD-Driven HTA System

The team built a four-layer automated testing system centered on "standardized use cases, automatic generation, efficient management, and visual debugging."

Layer 1: BDD Use Case Specification

Goal: Eliminate ambiguity and unify standards.

The core value of BDD is transforming vague business requirements into precise, executable test steps.

Comparison: Traditional vs. BDD

❌ Traditional Use Case:
"Enter the ticket filling page, select 1 adult and 1 child, and the crowd title is expected to be displayed."
(Vague steps, no defined entry point for the operation, and unclear text to verify.)

✅ BDD Use Case:
"Click [Add number 01] under [SKU module] (add one adult) → click [Add number 02] under [SKU module] (add one child) → expect [Passenger module] to display 'One adult ticket needs to be selected'."

By using a structure of "Module Positioning + Operation Description + Expected Results," room for misinterpretation is eliminated. The team defined 8 core BDD syntax types covering all scenarios, ensuring use cases can be parsed by machines.
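
For illustration, the BDD use case above could be represented as structured steps once parsed; the field names below are illustrative assumptions, not Trip.com's actual schema:

// Hypothetical representation of the BDD use case above as structured steps.
// Field names are illustrative assumptions, not Trip.com's actual schema.
const steps = [
    { module: "SKU module", action: "click", target: "Add number 01" },   // add one adult
    { module: "SKU module", action: "click", target: "Add number 02" },   // add one child
    {
        module: "Passenger module",                                        // expected result
        action: "expect",
        text: "One adult ticket needs to be selected"
    }
];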

Layer 2: HTA Automatic Generation

Goal: From "manual coding" to "use-case-to-code" generation.

This is the core engine of efficiency. The team achieved zero manual writing of test code through a three-step process:

  1. Environment Preparation: Configure the Jest environment and hta.config.js to specify project types, page entry points, and templates (e.g., specific templates for "Attraction Details" with mock data injection).

  2. Use Case Analysis: A parsing engine converts BDD natural language into machine instructions (a minimal sketch follows after this list).

    • Input: "Click [Select/Add] under [Passenger Module]"

    • Parsed: "Locate the element with testID 'Passenger Module' and trigger a click event."

  3. Code Generation: Combining templates and parsed results to generate executable Jest code.
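
To illustrate the use case analysis step, a parsing engine of this kind might map a BDD phrase to a machine instruction roughly as follows. This is a minimal sketch under assumed naming conventions, not Trip.com's actual engine:

// Minimal sketch of BDD step parsing (hypothetical, not the actual Trip.com engine).
// Handles phrases of the form: Click [Target] under [Module]
function parseBddStep(step) {
    const clickPattern = /^Click \[(.+?)\] under \[(.+?)\]$/;
    const match = step.match(clickPattern);
    if (!match) {
        throw new Error(`Unsupported BDD syntax: ${step}`);
    }
    const [, target, module] = match;
    // The generator would locate the element by testID and fire a click event
    return { action: "click", testID: module, target };
}

// parseBddStep("Click [Select/Add] under [Passenger Module]")
// => { action: "click", testID: "Passenger Module", target: "Select/Add" }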

Example Generated Code:

it("traveler selection logic", async () => {
    // Simulate click to increase adult
    await expectAttrsExistsAsync(screen, "SKU module", {
        children: ["Add copies 01"]
    });
    fireEvent.click(addAdultBtn);

    // Verify expected results
    expectAttrsExistsAsync(screen, "Passenger module", {
        children: ["An adult ticket needs to be selected"]
    });
});

Result: Writing efficiency improved from 2-3 cases per hour to minute-level generation.

Layer 3: Mock Data Management

Goal: From "dispersed storage" to "platform control."

Traditional local storage of Mock data was dispersed and hard to reuse. The team introduced a Mock Use Case Platform:

  • Persistent Storage: Git-based storage supporting multi-person collaboration via a visual interface.

  • Real-time Preview: Testers can append mockId=xxx to page links to verify data validity before testing.

  • NPM Package Release: Data is automatically packaged into NPM modules via pipelines, creating a closed loop of "Update → Release → Call," increasing HTA run speed by 30%.
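
For illustration, a test could consume such a published mock package roughly as follows; the package name and helper function are assumptions, not Trip.com's actual implementation:

// Hypothetical sketch: consuming a published mock package in a Jest setup file.
// The package name "@trip/mock-cases" and the getMockById helper are assumptions.
import { getMockById } from "@trip/mock-cases";

beforeEach(() => {
    // Resolve the mock payload by the same mockId used for page preview
    const mockData = getMockById("attraction-detail-basic");

    // Stub the network layer so the page under test renders against the mock payload
    global.fetch = jest.fn(() =>
        Promise.resolve({ json: () => Promise.resolve(mockData) })
    );
});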

Layer 4: Visual Debugging

Goal: Lower the barrier to debugging and enable precise troubleshooting.

To solve the difficulty of debugging in headless environments, the "Test Case Platform" provides:

  1. Step Result Mapping: One-to-one mapping between HTA results and BDD steps (Green = Success, Red = Failure).

  2. Code Preview: Click a step to view the generated code.

  3. Precise Error Prompts: Failures display specific reasons (e.g., "Text 'xxx' not found under current testID"), helping distinguish between description errors and business bugs.
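
For illustration, a single step's result record surfaced by such a platform might look roughly like this; the field names are assumptions, not the platform's actual report format:

// Hypothetical sketch of one step's result as shown on the Test Case Platform.
// Field names are illustrative assumptions, not the actual report format.
const stepResult = {
    step: "expect [Passenger module] to display 'One adult ticket needs to be selected'",
    status: "failed",             // rendered red on the platform; "passed" renders green
    generatedCode: 'await expectAttrsExistsAsync(screen, "Passenger module", { ... })',
    error: "Text 'One adult ticket needs to be selected' not found under current testID"
};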

Implementation Results: A Triple Breakthrough

Data from the "Attraction Details" business scenario demonstrates the transformation's value:

1. Quality: Greatly Improved

  • Code Line Coverage: Increased from 71% → 91%.

  • Branch Coverage: Increased from 60% → 74%.

  • Risk Reduction: Potential online risks reduced by 60%.

  • Optimization: While steps increased (823 to 1852) for better granularity, the number of use case files dropped (500 to 214), reducing maintenance costs by 40%.

2. Efficiency: Significantly Accelerated

  • Project Cycle: Reduced from 15.5 days to 14 days.

  • Release Efficiency: Increased by 10%.

  • Manpower Structure: The developer-to-tester ratio improved from 4.4:1 to 7:1. Testing manpower requirements dropped by 30%, freeing resources for core business development.

3. Cost: Effectively Controlled

  • Test Cost Ratio: Dropped from 25% to 5%.

  • Example: For a 10-person-day project, testing cost dropped from 2.5 days to 0.5 days.

  • Maintenance: Mock data reuse rate increased by 50%.

Additionally, an "Automated Quality Access Control" gate was implemented in GitLab CI/CD, blocking merges when coverage falls below the standards (line coverage ≥ 90%, branch coverage ≥ 70%).
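
For example, a gate of this kind can be enforced with Jest's coverageThreshold option so the test job fails when coverage drops below the bar; how the failing job blocks the merge in GitLab is pipeline configuration and is assumed here:

// jest.config.js: fail the coverage run when thresholds are not met.
// The thresholds mirror the gate described above; the GitLab pipeline wiring
// that turns a failed run into a blocked merge is an assumption.
module.exports = {
    collectCoverage: true,
    coverageThreshold: {
        global: {
            lines: 90,
            branches: 70
        }
    }
};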

Future Plan: Toward "Full-Link Automation"

While current pain points are resolved, the team is exploring deeper automation:

  • Mock Data Auto-Collection: Integrating with user behavior recording platforms to automatically generate Mock use cases from logs, eliminating manual data entry.

  • Pipeline Standardization: Integrating visual debugging into standard pipelines to support batch running and remove local environment dependencies.

Conclusion

The Trip.com BDD-driven HTA practice demonstrates that automated testing transformation is not just "tech stacking," but a system reconstruction centered on "Business Behavior."

By using BDD to standardize requirements and automation tools to close the loop (Use Case → Code → Test → Verification), the team achieved the ultimate goal of "Guaranteed Quality, Improved Efficiency, and Controlled Costs." This shift from "Human-Driven" to "Behavior-Driven" paves the way for moving from passive testing to active quality assurance.

Source: TesterHome Community
