Source: TesterHome Community

I’m writing this article to share what I consider to be valuable insights, in the hope of helping new colleagues who have just started testing, as well as experienced testers.
I hope that our new friends and those about to embark on a career in testing can understand the underlying logic of testing – that is, the points that may be hidden from view in our daily work, not just the obvious tasks like writing test cases, filing bugs, developing automation, or building platforms.
As the saying goes: outsiders watch the spectacle, while insiders know the craft.
I believe testers should not become mere conveyors of Product Requirements Documents (PRDs), and senior test engineers should not just be developers of testing tools.
For testers, a solid grasp of fundamental testing theory is a must, and development and coding skills are equally indispensable. Lacking either makes it difficult to become an outstanding tester.
Many testers in the past chose this path because they disliked coding. However, in the future – or even now – a tester who doesn’t understand code will struggle to excel.
Conversely, someone who only knows code but lacks a grounding in testing theory – such as test analysis, case design, and testing strategies, or who knows a little but rarely applies them in practice – is certainly not a qualified tester.
Let me walk you through some of the underlying logic of testing – the insider knowledge.
These three core competencies are widely acknowledged and relatively stable. Market demands for testers continuously evolve with emerging technologies, yet the three core competencies remain essential and will consistently hold a central position.
Since the days of QTP over a decade ago, automation testing has been a goal for testers.
Today, the landscape is filled with diverse automation technologies and frameworks. Market expectations of testers are higher than ever. Testers are now expected not only to write automated test cases, but also to develop and maintain automation framework platforms.
Pure black-box testers have either already upgraded their skills or are on that journey. Those relying entirely on black-box testing are becoming increasingly rare.
If you can’t write automated test cases or understand programming languages, getting your resume past the initial screening is probably a challenge.
However, every coin has two sides. As testers become more proficient in coding, foundational testing skills risk being neglected, and the professional expertise of the testing discipline gradually fades.
Just as a boat sailing against the current must forge ahead or be swept back:
The three core competencies should advance in parallel, without favoring one over the other.
Having participated in numerous recruitment interviews in my department, my observation is that many testers, despite years of experience, do not have a good command of test case design methods and strategies.
At least 60% of people don’t use any specific design methods for test cases, nor do they think about test analysis and design. Most are merely executors of functional tests, with little thought given to test design.
Few write test plans, and test cases are often just a breakdown of the PRD. In short, testers can all too easily become conveyors of the PRD.
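To make "test case design methods" concrete rather than abstract, here is a minimal sketch of two classic techniques, equivalence partitioning and boundary-value analysis. The rule and function are hypothetical, invented for illustration (an imagined PRD clause: a coupon applies to order amounts from 100 to 1000 inclusive), not taken from any real system.

```python
# Hypothetical rule from an imagined PRD clause: a coupon is valid
# for order amounts in the inclusive range [100, 1000].
def coupon_applies(amount: float) -> bool:
    return 100 <= amount <= 1000

# Equivalence partitioning: one representative value per partition.
# Boundary-value analysis: the edges of the range and their neighbors.
cases = [
    (50,   False),  # invalid partition: below the range
    (99,   False),  # lower boundary - 1
    (100,  True),   # lower boundary
    (500,  True),   # valid partition: interior value
    (1000, True),   # upper boundary
    (1001, False),  # upper boundary + 1
    (5000, False),  # invalid partition: above the range
]

for amount, expected in cases:
    actual = coupon_applies(amount)
    assert actual == expected, f"amount={amount}: got {actual}, want {expected}"
print("all design-method cases passed")
```

Seven deliberate cases cover what an unstructured "breakdown of the PRD" might need dozens of cases to cover, or might miss entirely at the boundaries.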
As a veteran in the field, I still hope for the healthy development of the testing profession. I wish for testing professionalism to keep pace with the times, even as we acquire new skills.
After all, quality assurance is the very foundation of a tester’s role.
Black-box testing treats the program as a black box, examining whether it functions correctly according to the PRD without considering its internal structure.
Specifically, it checks whether the program appropriately accepts input data and produces correct output.
This is the definition of black-box testing, and it is also its underlying logic.
Many people overlook definitions, but definitions often reveal fundamental truths.
In our work, many people get accustomed to testing a certain type of system, only to switch to a new business domain and suddenly feel lost, assuming an adaptation period is always needed.
The principle remains the same: once you grasp the underlying logic of black-box testing, you can get up to speed quickly without an extended adjustment period.
Most of our testing is black-box. Therefore, regardless of the system type:
Our testing strategy is always to:
“Examine whether the program functions correctly according to the PRD, and whether it appropriately accepts input data and produces correct output.”
Your testing basis is the PRD. You must know the PRD inside out, then analyze its inputs and outputs.
Covering these aspects will get you to about 80% – meaning you’ll be well on your way to successfully delivering the project.
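The strategy above can be sketched in code: derive input/output pairs from the PRD alone and run them against the system as an opaque box. Everything here is a hypothetical illustration; the login rule, the function, and its return values are assumptions, and the local stand-in implementation exists only so the sketch runs (in real black-box testing you would call the deployed system, not read its code).

```python
# Assumed PRD clause (invented for illustration): the login API returns
# "ok" for a registered user with the correct password,
# "bad_credentials" otherwise, and rejects empty input.

def login(username: str, password: str) -> str:
    # Stand-in implementation so the sketch is runnable; the tester
    # never looks inside this function.
    if not username or not password:
        return "rejected"
    registered = {"alice": "s3cret"}
    return "ok" if registered.get(username) == password else "bad_credentials"

# Each row is derived from the PRD, not from reading the code:
# (input data) -> expected output
spec_cases = [
    (("alice", "s3cret"), "ok"),               # valid input, correct output
    (("alice", "wrong"),  "bad_credentials"),  # wrong password
    (("bob",   "s3cret"), "bad_credentials"),  # unregistered user
    (("", ""),            "rejected"),         # empty input must be rejected
]

for args, expected in spec_cases:
    assert login(*args) == expected, f"{args} -> expected {expected}"
print("input/output checks against the PRD passed")
```

Note that the table checks both halves of the definition: whether the program appropriately accepts (or rejects) input data, and whether it produces the correct output.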
Finally, let me reiterate, since I worry the point may get lost precisely because everything above is common knowledge.
From day one of learning testing, everyone understands black-box testing, inputs, and outputs.
But truth often resides in simplicity. Remember its definition!
When you encounter a project and don’t know where to start testing:
Take that definition, read it three times carefully, and you will surely find the answer.
Pure black-box scenarios are actually rare in practice. Beyond understanding inputs and outputs, grasping the intermediate processing logic helps even more.
More importantly: study the PRD thoroughly. Analyze its content meticulously, leaving no paragraph or word unchecked.
PRDs and design documents often contain numerous loopholes waiting to be uncovered.
Here, ‘input’ is not just simple input fields on a UI.
Anything that can trigger system execution qualifies as input.
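The point that input is "anything that can trigger system execution" can be sketched as follows. All names here are hypothetical, invented for illustration: one piece of business logic is driven through three different entry points, so each channel counts as a distinct input in test design.

```python
import json

# Shared business logic under test (hypothetical rule: an order is
# accepted only if its amount is positive).
def apply_order(event: dict) -> str:
    if event.get("amount", 0) <= 0:
        return "rejected"
    return "accepted"

# Input channel 1: a direct API-style call with a request body.
def http_handler(body: dict) -> str:
    return apply_order(body)

# Input channel 2: a message-queue-style payload arriving asynchronously.
def mq_consumer(message: str) -> str:
    return apply_order(json.loads(message))

# Input channel 3: a timer/job-style trigger re-processing stored orders.
def nightly_job(pending: list) -> list:
    return [apply_order(e) for e in pending]

assert http_handler({"amount": 10}) == "accepted"
assert mq_consumer('{"amount": -1}') == "rejected"
assert nightly_job([{"amount": 5}, {"amount": 0}]) == ["accepted", "rejected"]
print("all three input channels exercised")
```

A tester who only types into the UI form covers channel 1 and misses the malformed message or the job that runs at 2 a.m.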
Based on architectural layers, inputs can be categorized as follows.
Positive operations – single actions:
Complex operations:
Reverse operations:
Critical emphasis:
Common system feedback and user‑observable changes:
Invisible outputs include:
Why this matters:
While visible outputs help verify 90%+ of functionality, many fields are not displayed. Some are used only by downstream systems; others are reserved for future use.
These invisible parts frequently cause system anomalies and represent the biggest hidden risks.
Therefore, testing should not only be done from the user’s perspective, but also from the designer’s perspective, and more importantly, from the perspective of the entire product.
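A small sketch of verifying invisible outputs alongside the visible one. The scenario, field names, and in-memory stand-ins for the database and message bus are all assumptions for illustration.

```python
# Hypothetical scenario: after a 'place order' action, the visible output
# is the confirmation text, but invisible outputs also change state: a
# reserved DB field and a message for a downstream settlement system.

orders_table = {}      # stand-in for a database table
downstream_queue = []  # stand-in for a message bus

def place_order(order_id: str, amount: float) -> str:
    orders_table[order_id] = {
        "amount": amount,
        "settle_flag": "PENDING",   # never shown on any UI
    }
    downstream_queue.append({"order_id": order_id, "amount": amount})
    return f"Order {order_id} confirmed"   # the only visible output

visible = place_order("A100", 88.0)

# Verifying only the visible output covers the user's perspective...
assert visible == "Order A100 confirmed"

# ...but the invisible outputs carry the hidden risk, so check them too:
assert orders_table["A100"]["settle_flag"] == "PENDING"
assert downstream_queue == [{"order_id": "A100", "amount": 88.0}]
print("visible and invisible outputs both verified")
```

A test that stops at the confirmation text would pass even if `settle_flag` were never written and the downstream system never received its message.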
Speaking of test analysis and design, I believe this is the most core competency of a tester.
The black-box testing and input‑output model discussed above are methods specific to functional testing. While functional testing accounts for roughly 80–90% of typical system testing, it is not the whole picture.
Moreover, it is just one phase and type of testing. To excel at testing, test analysis and design are indispensable.
Given a project, how do you approach testing it? How do you ensure quality?
Many interviewees answer: analyze requirements, write test cases, execute testing, issue a report.
That is merely the testing process – it outlines the sequence but does not guide a tester on how to test, how to perform test analysis, and certainly does not ensure system quality.
You can use the 5W2H method for analysis. But as a test architect, you don’t need all of them.
2W1H is sufficient.
These three questions are the most important, yet often overlooked because they are replaced by experience.
When encountering a different system or business domain, people feel lost. This is where 2W1H analysis helps.
Why – Why are we doing this project? What is its background?
What – What do we need to test in this project? What is the scope?
How – How do we test this project? What strategies and methods?
When – What is the expected completion timeline?
Who – What resources can be called upon?
Where – Is a centrally‑located, closed‑door effort needed? Should dev and test teams sit together?
How Much – What is the cost? How many people? What server resources?
The underlying logic of test analysis and design is answering the three 2W1H questions effectively.
These are still methodologies. The detailed operational steps deserve their own article.
In the meantime, I encourage you to learn about the Heuristic Test Strategy Model (HTSM), which complements the 2W1H approach well.
As noted, soft skills like communication are considered one of the three core competencies for excellent testers, with over 90% agreement.
Here are my summary insights.
In the e‑commerce domain, speed and change are the norms. Some projects demand rapid launch. Due to time constraints, the PRD may have loopholes or overlooked logic.
What should testers do? Communicate.
Without communication, pitfalls are easily missed.
Communication must be proactive – reach out to product, development, and other test teams.
Treat yourself as the owner, and the project's quality will be safeguarded.
An employee who acts like the owner is the one the boss trusts most.
The fundamental basis for system testing is the requirement specification.
As testers, we are the last line of defense.
Testers must perform their own analysis, not simply follow the developer’s line of thought.
What developers tell you has already been processed and may deviate from the original requirements. This is precisely the value of testing.
Even if 99% of what they say is correct, it should only serve as reference material. You must analyze based on the requirements themselves.
Requirements are the basis for testing, but they can also be wrong.
Testing is about questioning everything – every process, every detail.
When reading requirements:
Use quality standards, test strategies, and experience to fuel your skepticism.
Think thrice for every feature.
The larger the company, the more departments, the more complex the systems. Greater interdependence leads to higher failure probability.
More boundaries mean higher communication costs. If any details are missed, not considered, or assumed to be someone else’s responsibility, pitfalls emerge.
These are often discovered during integration testing, but they can severely impact schedule, causing rework and delays.
To uncover these pitfalls during testing, a lead tester is helpful. But:
The lead shouldn’t be just one person. All testers should act as the lead.
As the final quality gatekeepers:
Testing requires seeing the big picture, not just the local view.
Think and test from the company’s position and the user’s perspective.