
Performance Testing Handbook: Key Concepts & JMeter Best Practices

A complete guide to performance testing key concepts (concurrent users, QPS, JMeter threads), async/sync task testing, JMeter best practices, and exit criteria—helping B2B QA teams avoid pitfalls and align tests with customer requirements.

Foreword

I currently lead test management at a B2B company, where most team members are mid-level QA engineers who have not yet built systematic testing methodologies. Early in my tenure, during performance test plan reviews, I noticed a critical gap: the team could execute performance tests proficiently, writing JMeter and Python scripts with ease, but communication and execution broke down over core performance testing concepts. That ambiguity leads to wasted engineering effort, misaligned test results, and eroded customer trust.

Common pain points we faced include:

  • Confusion between concurrent users, online users, QPS, and JMeter threads: If a contract requires “supporting 50 concurrent users”, is setting JMeter threads to 50 a valid simulation?

  • Misunderstanding “concurrency” for asynchronous tasks: Does testing only the async trigger endpoint (with no errors) mean the entire async flow meets performance requirements?

These issues are not unique to my team. Many QA professionals struggle with these concepts due to their inherent complexity, and teams often lack unified terminology. This guide is designed to clarify these error-prone concepts—it does not teach basic performance testing or tool usage from scratch. Instead, it serves as a go-to reference for teams to communicate using the same conceptual framework, ensuring performance tests are credible, consistent, and actionable.

1. Define Clear Performance Test Objectives

For most B2B and B2C projects, performance testing focuses on three core goals: Fixed-Load Testing, Capacity Stress Testing, and Endurance Testing. Choosing the right objective is critical to aligning test results with business and customer requirements.

1.1 Fixed-Load Testing

Fixed-load testing involves testing under stable, predefined load to quantitatively measure response time and throughput. It is used to verify whether the system meets contractual or SLA performance targets under expected business pressure—making it ideal for validating compliance with customer requirements.

Fixed-load testing is divided into two types:

  • Pulse fixed-load testing: Short duration (from a few seconds up to 5 minutes), used to simulate sudden traffic spikes.

  • Non-pulse fixed-load testing: Medium duration (10–60 minutes), used to simulate steady, sustained traffic.

1.2 Capacity Stress Testing (Load Testing)

Unlike fixed-load testing (which validates behavior under known load), the goal of capacity stress testing is to find the system’s maximum pressure limit. This helps identify bottlenecks and guide optimization efforts.

How to execute capacity stress testing:

  1. Start with low load and gradually increase it in stages.

  2. Stop when throughput stops scaling linearly or degrades sharply (this inflection point is the system’s bottleneck).

  3. Run each load stage for 10–20 minutes to ensure stable metrics.

1.3 Endurance Testing (Long-Run Testing)

Endurance testing simulates sustained real-world load over an extended period to detect slow, chronic issues that short-term tests miss. These issues include memory leaks, abnormal resource recycling, connection pool exhaustion, and thread pool degradation.

Best practices for endurance testing:

  • Recommended duration: 6–12 hours (long enough to expose chronic issues).

  • Continuously monitor key metrics: CPU usage, memory usage, GC frequency, and connection/thread pool stability.

General Recommendations for Test Objectives

  • Migration/refactoring projects: Use fixed-load testing to compare performance before and after changes.

  • Projects with clear performance goals: Use fixed-load or capacity stress testing. Fixed-load testing is faster, while capacity stress testing is better for identifying bottlenecks.

  • Projects without clear goals: Use capacity stress testing and report only factual data (no pass/fail judgment).

  • All projects: Include endurance testing, with lower priority than fixed or stress testing based on project schedule and risk.

2. Classify System Task Types: Synchronous vs. Asynchronous

A common mistake in performance testing is applying the same testing patterns to synchronous and asynchronous tasks. Understanding the difference is critical to creating realistic test scenarios and accurate results.

2.1 Synchronous Interfaces

Synchronous interfaces require the client to send a request, wait for processing to complete, and receive a full response before proceeding. The user receives a response immediately, indicating the task is complete.

Examples of synchronous interfaces:

  • Chat APIs and customer support interfaces

  • Recommendation and search APIs

  • Form and questionnaire submission interfaces

2.2 Asynchronous Tasks

Asynchronous tasks work differently: the system immediately accepts the task, returns a task ID to confirm receipt, and processes the work in the background. The client must verify task completion via polling, callbacks, logs, or database queries.

Examples of asynchronous tasks:

  • Offline data computing and batch processing

  • Offline labeling and machine learning inference tasks

  • Bulk export and report generation tasks

Critical Mistake: Testing Async Tasks Like Sync Interfaces

Many QA engineers apply synchronous testing logic to asynchronous flows, resulting in unrealistic load and incorrect test conclusions. Here’s a real-world example:

Requirement: Support 10 concurrent asynchronous tasks, with an average completion time of ≤ 10 minutes.

Mistake: Hitting the async trigger endpoint at 10 requests/sec for 5 minutes → total tasks = 3000 (far exceeding the system’s design capacity) → false “test failure”.

Correct Approach: Trigger 10 tasks at once, wait for them to complete, then trigger another 10—ensuring no more than 10 tasks are processed concurrently at any time.

Key Differences in Testing Sync vs. Async Tasks

  • Concurrency control: Async testing uses one-time batch triggering (not continuous requests) to maintain a fixed number of background tasks.

  • Metric collection: Tool-reported response time and success rate only reflect “task submission” status—not the actual task processing time or success rate. Real metrics must be collected from UI logs, databases, or task status APIs.
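
The batch-trigger pattern above can be made concrete with a short Python sketch. The `trigger_task` and `get_status` callables are hypothetical stand-ins for your system's trigger endpoint and task-status API; the measured durations run from trigger to completion, which is the real processing metric rather than the submission time the load tool reports:

```python
import time

def run_async_batch_test(trigger_task, get_status, batch_size=10, batches=3,
                         poll_interval=1.0):
    """Trigger `batch_size` async tasks at once, poll until all complete,
    then start the next batch -- so no more than `batch_size` tasks ever
    run concurrently. Returns per-task durations measured from trigger
    to completion."""
    durations = []
    for _ in range(batches):
        # Fire one batch and record each task's start time by task ID.
        started = {trigger_task(): time.monotonic() for _ in range(batch_size)}
        pending = set(started)
        while pending:
            done = {tid for tid in pending if get_status(tid) == "done"}
            for tid in done:
                durations.append(time.monotonic() - started[tid])
            pending -= done
            if pending:
                time.sleep(poll_interval)
    return durations
```

With a 10-minute completion target, the pass/fail judgment would then be made on these durations (e.g., their average), not on the trigger endpoint's response times.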

3. Build a Comprehensive Performance Test Model

A valid performance test plan requires modeling four key dimensions to ensure test results reflect real-world conditions: the traffic model, the business model, the data volume model, and the cache model.

3.1 Clarify the Traffic Model (Concurrent Users, QPS, Threads)

One of the most common sources of confusion in performance testing is mixing up traffic-related terms. Customers often use “online users” or “concurrent users” in requirements, while testers report “threads” or “QPS” in results—leading to misalignment. Below are clear definitions to resolve this:

Core Traffic Terms Defined

  • Online users: Logged-in users who are active on the platform but not necessarily interacting. Does not directly equal system pressure (e.g., a user logged in but idle creates no load).

  • Concurrent users: Online users who are actively interacting with the system during a specific period. This is the most relevant metric for B2B performance requirements.

  • QPS / RPS: Server-side requests received per second (HTTP or RPC). Directly measures system load and is ideal for ToC systems with high real-time traffic.

  • TPS: Transactions completed per second (a transaction = a logical business flow, e.g., “query balance → transfer → verify balance”). Common in complex B2B workflows.

  • JMeter threads: Virtual users in testing tools. JMeter uses a blocking I/O (BIO) model: each thread waits for a response before sending its next request. Threads ≠ QPS.

Practical Example

3 online users, 2 concurrent users (assume each transaction consists of 2 requests):

  • User A: 2 requests/sec → completes 1 transaction per second.

  • User B: 1 request/sec → completes half a transaction per second.

  • Result: Concurrent users = 2, QPS = 3, TPS = 1.5.
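
The arithmetic above can be checked in a few lines of Python (the 2-requests-per-transaction figure is inferred from the example's numbers):

```python
# (requests_per_sec, requests_per_transaction) for each actively
# interacting user; the idle third online user adds no load.
active_users = [(2, 2),   # User A: 2 req/s, 2 requests per transaction
                (1, 2)]   # User B: 1 req/s, 2 requests per transaction

concurrent_users = len(active_users)
qps = sum(rps for rps, _ in active_users)
tps = sum(rps / rpt for rps, rpt in active_users)
```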

Industry Best Practices for Traffic Metrics

  • ToC systems: Use QPS/RPS (high traffic, real-time requirements, micro-service architecture).

  • ToB systems: Use concurrent users or TPS (business-focused, easier for customers to understand).

  • Customer alignment: If a customer uses “online users” in requirements, clarify the difference and translate it to concurrent users or TPS to ensure test goals are realistic.

3.2 Clarify the Business Model (Traffic Ratios)

The business model defines the ratio of different business flows under the same traffic. Even the same QPS can create drastically different system pressure depending on the business mix—making this a critical component of accurate performance testing.

Why Business Model Matters

Example: A chatbot API with a target QPS of 100. Different query types hit different algorithms, leading to varying resource consumption:

  • Balanced query mix: 137X CPU usage per second.

  • 60% single-intent, 30% multi-intent, 10% casual chat: 250X CPU usage per second.

How to Define the Business Model

  • Existing systems: Analyze 7 days of real production logs to identify traffic ratios by interface, intent, and request parameters. Use Linux Shell scripts for quick analysis (no complex coding required).

  • New systems: Agree on an initial ratio with product managers, customers, or stakeholders. Use this as the baseline for testing if no objections are raised.
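
The text suggests shell one-liners for log analysis; the same ratio extraction can be sketched in Python, assuming common access-log lines with the request line in double quotes (the function name and log format are illustrative):

```python
from collections import Counter

def interface_ratios(log_lines):
    """Count requests per interface path from access-log lines of the form
    '... "GET /api/search HTTP/1.1" ...' and return each path's share of
    total traffic as a fraction."""
    counts = Counter()
    for line in log_lines:
        try:
            # The request line sits between the first pair of double quotes;
            # its second token is the path.
            path = line.split('"')[1].split()[1]
        except IndexError:
            continue  # skip malformed lines
        counts[path] += 1
    total = sum(counts.values())
    return {path: n / total for path, n in counts.items()}
```

The resulting fractions map directly onto the If Controller or CSV ratios described in Section 5.2.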

3.3 Clarify the Data Volume Model

For most database-reliant systems, performance is highly sensitive to data size. Performance testing must use data volumes that reflect real-world production conditions to ensure accurate results.

  • New projects: Agree on the expected production data scale with customers and use this as the testing baseline.

  • Migration projects: Use production-equivalent data in the new environment (no extra data construction needed).

  • Limited environments: If simulating production data is too costly, focus on performance regression between versions (same data size for before/after comparisons).

3.4 Clarify the Cache Model

Over-reliance on cached data during testing can underestimate real server load. To avoid this, design test data to reflect real-world cache hit rates.

  • Prepare enough unique test data (≥ QPS × test duration) to ensure natural cache behavior.

  • Clear the cache or use new test data between test runs to avoid skewed results.
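
The "QPS × test duration" rule of thumb is easy to encode. The sketch below also accepts an intended cache hit rate as a hypothetical extension (not from the text) for systems where some repetition is realistic:

```python
import math

def unique_rows_needed(target_qps, duration_s, cache_hit_rate=0.0):
    """Minimum unique test-data rows so the run does not revisit cached
    entries more than intended. With cache_hit_rate=0 this reduces to
    the QPS x duration rule of thumb."""
    total_requests = target_qps * duration_s
    return math.ceil(total_requests * (1 - cache_hit_rate))
```

For example, a 10-minute run at 100 QPS with no intended cache hits needs 60,000 unique rows.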

4. Define Clear Performance Exit Criteria

Performance exit criteria are the benchmarks that determine whether a system meets requirements. These criteria must be clear, measurable, and aligned with customer expectations—critical for avoiding post-test disputes.

4.1 Throughput

System throughput (QPS/TPS) must match the target defined in the traffic model. Any deviation indicates a performance gap that needs addressing.

4.2 Response Time

  • Existing systems: No regression compared to peak production response times.

  • Customer-specified requirements: Follow the customer’s defined thresholds.

  • No explicit requirements: Default to 95th percentile response time ≤ 1s for synchronous interfaces; report metrics only (no hard thresholds) for asynchronous tasks.
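
When a tool does not report it directly, the 95th percentile can be computed from raw samples. Percentile definitions vary slightly between tools; the sketch below uses the simple nearest-rank method:

```python
import math

def p95(response_times_ms):
    """Nearest-rank 95th percentile: the smallest sample such that at
    least 95% of all samples are less than or equal to it."""
    ordered = sorted(response_times_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based rank
    return ordered[rank - 1]
```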

Response Time Definition by Task Type

  • Synchronous HTTP/RPC interfaces: Response time = time from request sent to full response received.

  • Streaming interfaces (e.g., agent-style conversational services): Response time = time from request sent to first byte received (time to first byte).

  • Asynchronous tasks: Response time = task end time − task start time (queue time optional, based on requirements).

4.3 Error Rate

  • Follow customer or stakeholder requirements.

  • Default threshold: Error rate ≤ 1% (for most B2B systems).

4.4 Resource Utilization

  • Follow customer or stakeholder requirements.

  • Default thresholds: CPU ≤ 80%, Memory ≤ 80% (to avoid resource exhaustion).

Extra Checks for Endurance Testing

  • No continuous memory growth (indicates potential memory leaks).

  • After GC, memory usage returns to near pre-test levels (minimal residual growth).

  • Connection/thread pool sizes remain stable (no exhaustion).

  • Disk space growth is within expected limits (avoid log overflow).

⚠️ Critical Anti-Pattern: The Water Tank Model

A dangerous illusion in performance testing: 0 error rate and normal average response time, but the system cannot sustain the target QPS over time. This happens due to the system’s “water tank” (queue/buffer) capacity.

Example: System throughput = 5 QPS, queue capacity = 50, test input = 10 QPS for 3 seconds. The queue does not fill in 3 seconds, so metrics look good. But the queue grows by a net 5 tasks per second, so once the test runs past roughly 10 seconds the queue overflows, errors spike, and the system bottleneck is exposed.

Conclusion: For systems with queuing or buffering mechanisms, short tests are misleading. Always run sufficiently long sustained pressure tests to validate real-world performance.
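
A simplified discrete simulation illustrates why the short test hides the overflow. It models one arrival-then-service step per second, so the exact overflow second may differ by one or two from a real system that interleaves continuously:

```python
def simulate_water_tank(input_qps, service_qps, queue_capacity, seconds):
    """Second-by-second simulation of a bounded queue ('water tank'):
    each second `input_qps` requests arrive, up to `service_qps` are
    processed, and anything beyond `queue_capacity` is rejected.
    Returns (total_errors, final_queue_depth)."""
    queue = 0
    errors = 0
    for _ in range(seconds):
        queue += input_qps                 # arrivals
        queue -= min(queue, service_qps)   # service
        if queue > queue_capacity:         # overflow -> errors
            errors += queue - queue_capacity
            queue = queue_capacity
    return errors, queue
```

With the article's numbers (5 QPS capacity, queue of 50, 10 QPS input), a 3-second run reports zero errors while the queue quietly fills; a run past ten seconds starts rejecting requests.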

5. Controlling Traffic & Business Models in JMeter (Step-by-Step)

JMeter is one of the most popular performance testing tools, but many QA engineers misuse it by equating threads to QPS. Below are proven methods to control traffic and business models in JMeter, aligned with real-world testing best practices.

5.1 How to Control QPS in JMeter

JMeter threads use a BIO model: each thread waits for a response before sending the next request. The relationship between threads, QPS, and average response time is defined by this formula:

QPS ≈ Threads × (1000 ms / Average Response Time (ms))

Method 1: Constant Throughput Timer (Manual Control)

  1. Use 1 thread to run a 10-second test and get the average response time.

  2. Estimate required threads using the formula: Threads ≈ (QPS × Average Response Time) / 1000.

  3. Add a Constant Throughput Timer to cap QPS (the timer enforces only an upper limit, not a lower one).

  4. Note: Target throughput is in QPM (requests per minute) → multiply target QPS by 60. Select “Calculate Throughput based on: All active threads”.
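
Steps 2 and 4 can be combined into a small helper. The headroom factor is my own addition, not from the text: it oversizes the thread count slightly so that the timer, which only caps throughput, remains the limiting factor rather than the thread count:

```python
import math

def jmeter_plan(target_qps, avg_response_ms, headroom=1.2):
    """Estimate the thread count from QPS ~= threads * 1000 / avg_response_ms,
    and convert the target QPS to the per-minute value the Constant
    Throughput Timer expects."""
    threads = math.ceil(target_qps * avg_response_ms / 1000 * headroom)
    qpm = target_qps * 60  # Constant Throughput Timer works in samples/minute
    return threads, qpm
```

For example, 50 QPS at a 200 ms average response time needs at least 10 threads, and the timer should be set to 3000 samples per minute.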

Method 2: Thread Group Only (Simplified Manual Control)

If you have a clear understanding of the system’s performance, you can control QPS by adjusting the thread count directly (no timers needed). This requires familiarity with the thread-to-QPS mapping for your system.

Method 3: Ultimate Thread Group + Throughput Shaping Timer (Auto Control, Requires Plugin)

  1. Set the target QPS curve in the Throughput Shaping Timer (name the component “timer”).

  2. In the Ultimate Thread Group, use the dynamic feedback function: ${__tstFeedback(timer,1,500,30)} to adjust threads automatically and follow the QPS curve.

5.2 How to Control Business Model Ratios in JMeter

Two effective methods to control business flow ratios, depending on your test needs:

Method 1: Random Variable + If Controller

  1. Use a Random Variable to generate a uniform random number (1–100, e.g., variable name “prob”).

  2. Add three If Controllers with conditions to achieve the desired ratio (e.g., 60%/30%/10%):

    1. 60%: ${__jexl3(${prob}>=1 && ${prob}<=60,)}

    2. 30%: ${__jexl3(${prob}>=61 && ${prob}<=90,)}

    3. 10%: ${__jexl3(${prob}>=91 && ${prob}<=100,)}

Method 2: CSV Data Set Config

For more precise control, create a CSV file with 100 rows (matching the desired ratio: e.g., 60 rows for scenario 1, 30 for scenario 2, 10 for scenario 3). Use CSV Data Set Config to loop through the file, naturally achieving the desired business ratio.
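
Writing that 100-row file by hand is tedious; a short script (names illustrative) can emit it from a percentage map:

```python
import csv
import io

def ratio_csv(scenarios):
    """Build a 100-row, single-column CSV whose row counts match the
    desired percentage mix, e.g. {'single_intent': 60, 'multi_intent': 30,
    'chat': 10}. JMeter's CSV Data Set Config reproduces the ratio as it
    loops through the file."""
    assert sum(scenarios.values()) == 100, "percentages must sum to 100"
    buf = io.StringIO()
    writer = csv.writer(buf)
    for name, pct in scenarios.items():
        for _ in range(pct):
            writer.writerow([name])
    return buf.getvalue()
```

Save the output to a file, point CSV Data Set Config at it, and branch on the scenario column in your sampler logic.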

6. Performance Testing Execution Best Practices

Even the best test plan can fail if execution is not carefully managed. Follow these best practices to ensure accurate results and avoid costly mistakes:

  1. Notify developers: Sync with the development team before testing to facilitate issue troubleshooting.

  2. Monitor both sides: Track metrics for both the system under test (SUT) and the pressure generator. If the generator’s resources are saturated, the bottleneck may be in the testing tool—not the system.

  3. Verify the environment: Double-check IPs, domains, and test identifiers to avoid accidentally testing production systems (a critical mistake that can cause downtime).

  4. Avoid peak conflicts: Do not run performance tests during integration testing, regression testing, or other high-load activities to prevent interference.

Final Thoughts

Performance testing success depends on clarity—clear concepts, clear objectives, and clear alignment with business and customer requirements. This guide is designed to resolve the most common ambiguities that plague QA teams, especially in B2B environments. By following these practices, you can ensure your performance tests are credible, actionable, and aligned with real-world needs.

For QA managers and engineers looking to refine their performance testing process, this guide serves as a reference to standardize terminology, avoid common pitfalls, and deliver results that customers trust. Remember: performance testing is not just about running scripts—it’s about validating that the system can deliver value under real-world conditions.
