As digital transformation accelerates, User Experience (UX) has evolved into a core competitive advantage for software products. To ensure high-quality delivery, UX testing is no longer optional—it is a critical stage in the Software Testing Life Cycle (STLC).
Traditionally, QA focused on functional, performance, and security testing, prioritizing the closure of "Fatal" and "Critical" defects. However, without explicit UX standards, user satisfaction can remain low. To bridge this gap, the Chengdu R&D Testing Team has developed a robust methodology to quantify and improve system usability.
According to the ISO 9241-210 standard, User Experience is defined as a person's perceptions and responses resulting from the use and/or anticipated use of a product or system.
It is a holistic combination of:
Product Factors: Brand image, UI/UX design, performance, and accessibility.
User Factors: Skills, prior experience, and personal attitudes.
Context of Use: The specific environment where the interaction occurs.
Usability testing focuses on executing test cases derived from a Usability Metrics Framework. By comparing the final product with initial prototypes, testers can pinpoint discrepancies in the user journey.
Our framework utilizes a three-tier hierarchy:
Level 1 (Dimensions): Understandability, Operability, Learnability, and Interface Friendliness.
Level 2 (Indicators): 17 indicators including Compatibility and Consistency.
Level 3 (Criteria): 88 granular checkpoints (e.g., "Visual hierarchy consistency" and "Font scaling logic").
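The three-tier hierarchy above can be modeled as a simple nested data structure. The sketch below is illustrative only: apart from the two example criteria named in the text, the dimension, indicator, and grouping names are assumptions, and the full set of 17 indicators and 88 criteria is not reproduced.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Criterion:
    """Level 3: a granular, pass/fail checkpoint."""
    name: str
    passed: Optional[bool] = None  # None = not yet evaluated

@dataclass
class Indicator:
    """Level 2: an indicator grouping related criteria."""
    name: str
    criteria: List[Criterion] = field(default_factory=list)

@dataclass
class Dimension:
    """Level 1: a top-level usability dimension."""
    name: str
    indicators: List[Indicator] = field(default_factory=list)

# Illustrative slice of the framework; the grouping under
# "Interface Friendliness" / "Consistency" is an assumption.
framework = [
    Dimension("Interface Friendliness", indicators=[
        Indicator("Consistency", criteria=[
            Criterion("Visual hierarchy consistency"),
            Criterion("Font scaling logic"),
        ]),
    ]),
]

def count_criteria(dimensions: List[Dimension]) -> int:
    """Total number of Level 3 checkpoints in the framework."""
    return sum(len(i.criteria) for d in dimensions for i in d.indicators)
```

Keeping criteria as leaf nodes makes it straightforward to roll pass/fail results up into indicator- and dimension-level scores during a test cycle.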
This method involves UX experts and interaction designers auditing the system based on Nielsen’s 10 Usability Heuristics. Unlike standard QA, this requires no formal test cases; experts rely on their professional intuition and the Expert Testing Principles to identify friction points directly.
Simulated user testing involves recruiting representative users to perform specific tasks. We monitor:
Task Success Rate: Can users complete the goal?
Time-on-Task: How efficient is the workflow?
Subjective Satisfaction: Qualitative feedback on the emotional experience.
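The two quantitative measures above can be computed directly from per-session records. This is a minimal sketch with hypothetical data; the record shape `(user_id, completed, seconds_on_task)` and the convention of averaging time only over completed runs are assumptions, not part of the original methodology.

```python
from statistics import mean
from typing import List, Tuple

# Hypothetical session records: (user_id, completed, seconds_on_task)
Session = Tuple[str, bool, float]
sessions: List[Session] = [
    ("u1", True, 42.0),
    ("u2", True, 55.5),
    ("u3", False, 120.0),
    ("u4", True, 48.2),
]

def task_success_rate(records: List[Session]) -> float:
    """Share of sessions in which the user completed the goal."""
    return sum(1 for _, done, _ in records if done) / len(records)

def mean_time_on_task(records: List[Session]) -> float:
    """Average duration over completed sessions only (assumed convention)."""
    return mean(t for _, done, t in records if done)

print(f"Success rate: {task_success_rate(sessions):.0%}")   # → 75%
print(f"Mean time-on-task: {mean_time_on_task(sessions):.1f}s")
```

Subjective satisfaction, by contrast, is usually captured with a post-task questionnaire rather than instrumented from logs.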
We applied these methodologies across three major platforms: Task Management, Community Systems, and Cloud Notes.
Key Findings from the Task Management System:
Total Issues Identified: 443 UX defects.
Breakdown: 358 from Usability Testing, 33 from Expert Review, and 52 from Simulated User Testing.
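The breakdown above can be tallied and cross-checked against the total with a few lines. The counts are taken from the findings; only the tabulation itself is added here.

```python
from collections import Counter

# Reported defect counts for the Task Management System, by method.
defects_by_method = Counter({
    "Usability Testing": 358,
    "Expert Review": 33,
    "Simulated User Testing": 52,
})

total = sum(defects_by_method.values())  # 443, matching the reported total
for method, n in defects_by_method.most_common():
    print(f"{method}: {n} ({n / total:.1%})")
```

Usability testing against the metrics framework accounts for roughly four out of five findings, which is consistent with its role as the baseline method.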
Optimization Example: An "Import Template" button was redesigned after testing showed users found its label ambiguous and its placement inconspicuous. Post-fix, the feature saw a significant increase in user adoption.
UX testing is most effective when integrated early in the Product Development Life Cycle (PDLC):
Usability Testing should serve as the baseline for any system that has a complete design specification to verify against.
Expert Reviews and Simulated User Testing provide deeper insights when design specs are incomplete or when comparing multiple design prototypes.
Final Thought: Testing is only one part of the puzzle. Real UX excellence requires seamless collaboration between Business, Design, Development, and QA teams to build a truly User-Centered Design (UCD) ecosystem.