Probench's Performance Testing
  • 02 Sep 2022
  • 3 Minutes to read


Introduction

This February we decided to test the performance of Probench. We started this task with a few questions in mind:

  • What is the maximum number of concurrent users that the system is comfortable with?
  • What is the threshold of Probench after which it would affect the performance or might start giving errors?
  • What are the causes of these errors that we are getting after we hit the performance threshold?

The tool that we decided to use to conduct this test was Apache JMeter.

[Example of JMeter Test]

The terminology used in JMeter test reports

  • #Samples - the number of times the page was hit
  • Average - the average time, in milliseconds, for the page to open
  • Min - the minimum time, in milliseconds, for the page to open
  • Max - the maximum time, in milliseconds, for the page to open
  • Std Dev (Standard Deviation) - a measure of how much the response times vary from the average, in milliseconds
  • Throughput - the number of requests handled per second
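As a quick illustration of how these metrics relate to one another, the sketch below recomputes them from a hypothetical list of response times. The sample values and the 2-second test duration are made up for the example; they are not taken from the Probench reports.

```python
import statistics

# Hypothetical response times (ms) recorded for one page; the values and
# the 2-second test duration are invented for this illustration.
response_times_ms = [210, 180, 350, 540, 190, 220, 300, 410, 260, 230]
test_duration_s = 2.0

samples = len(response_times_ms)               # #Samples
average = statistics.mean(response_times_ms)   # Average (ms)
minimum = min(response_times_ms)               # Min (ms)
maximum = max(response_times_ms)               # Max (ms)
std_dev = statistics.stdev(response_times_ms)  # Std Dev (ms)
throughput = samples / test_duration_s         # Throughput (requests/second)

print(samples, average, minimum, maximum, round(std_dev), throughput)
```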

Achieving 0% error

The first few tests used time and the number of concurrent users as the primary parameters. We selected a small survey page with 32 questions, and the objective was simply to load the page successfully. We started by running the tests for 10 - 50 users in incremental steps over 2 minutes, aiming for 0% errors, and tried various experiments to bring the error percentage down to 0, such as bypassing the login.
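Conceptually, each cycle of such a test fires a batch of concurrent page loads and measures the error percentage. The sketch below illustrates that shape with a stubbed `load_page` function; the stub and its always-successful response are assumptions made for the example, not Probench or JMeter code.

```python
from concurrent.futures import ThreadPoolExecutor

def load_page(user_id):
    # Stand-in for an HTTP request to the survey page. In the real tests
    # this was a JMeter HTTP sampler; here it simply reports success.
    return 200

def run_cycle(concurrent_users):
    """Fire one batch of concurrent page loads and return the error rate (%)."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        statuses = list(pool.map(load_page, range(concurrent_users)))
    errors = sum(1 for status in statuses if status != 200)
    return errors / concurrent_users * 100

# Ramp up in incremental steps, as in the tests described above.
for users in range(10, 51, 10):
    print(f"{users} users -> {run_cycle(users):.0f}% errors")
```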

[500 users for 30 mins]

Unable to achieve the desired results from this test, we shifted our focus to getting 0% errors first, irrespective of the time taken.

When errors persisted even then, we made changes to the system to finally achieve 0% errors.

Code changes to fix performance issues

  1. Added concurrency while saving and loading the survey pages
  2. Handled a few previously unhandled exceptions
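The two changes can be sketched together as follows. This is a hypothetical illustration only: the names (`save_survey`, `load_survey`, the per-survey lock registry) are invented for the example, and the actual fix lives in Probench's own codebase, whose internals are not shown in this article. The idea is to serialise concurrent saves and loads of the same survey with a lock, and to catch previously unhandled exceptions so a request fails gracefully instead of erroring out.

```python
import threading

# One lock per survey, so concurrent saves/loads of the same survey are
# serialised while different surveys proceed in parallel.
_survey_locks = {}
_registry_lock = threading.Lock()
_store = {}  # stand-in for the real persistence layer

def _lock_for(survey_id):
    # The registry lock guards creation of per-survey locks.
    with _registry_lock:
        return _survey_locks.setdefault(survey_id, threading.Lock())

def save_survey(survey_id, answers):
    try:
        with _lock_for(survey_id):
            _store[survey_id] = dict(answers)
        return True
    except Exception:
        # Previously, unhandled exceptions here surfaced as errors in the
        # load test; catching them lets the request fail gracefully.
        return False

def load_survey(survey_id):
    with _lock_for(survey_id):
        return _store.get(survey_id, {})
```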

[340 users for 3 seconds]

Running the test for multiple users

Utilising the changes that had got us to 0% errors, we went back to our original plan of running the tests for 2 - 30 minutes. Although we had achieved 0% errors for short 1 - 3 second bursts, we still could not reach 0% when the duration and number of users were increased.


The breakthrough came when we signed the user out at the end of every test cycle. This finally gave us 0 errors on all the tests that had previously failed.
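The sign-out fix can be sketched as a test cycle that always releases the session, even when the page load fails. `FakeSession` below is a made-up stand-in for a real authenticated session; only the login/load/logout shape is the point.

```python
class FakeSession:
    """Minimal stand-in for an authenticated browser session (illustrative only)."""
    def __init__(self):
        self.active = False
    def login(self):
        self.active = True
    def load(self, page):
        assert self.active, "must be signed in to load a page"
        return f"loaded {page}"
    def logout(self):
        self.active = False

def run_test_cycle(session, page):
    # Sign in, load the page, then always sign out -- even if the load
    # fails -- so no stale sessions accumulate across cycles.
    session.login()
    try:
        return session.load(page)
    finally:
        session.logout()

s = FakeSession()
result = run_test_cycle(s, "survey")
```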

[Yellow highlighted cells signify that the test has passed after the breakthrough]


[Test result for 2000 users run over 5 mins]


[Test result for 4000 users run over 21 mins]


[Test result for 4000 users run over 55 mins]

Long page tests

After completing the tests for a relatively small page, we moved on to loading a long page with 88 questions. We completed this cycle of tests relatively easily by leveraging the changes made in the previous cycle. You can see some of the test results below.

[Test result for 2000 users run over 5 mins]


[Test result for 4000 users run over 34 mins]


[Test result for 4000 users run over 55 mins]

Page save test

After successfully rendering pages for up to 6500 concurrent users, we moved on to testing the page save functionality. As with the page render tests, we tested this functionality on the same 2 pages (a small page with 32 questions and a long one with 88 questions). We were again successful in running these tests. You can find some of the test results below.

[Test result for 2000 users run over 8 mins (short page)]


[Test result for 3000 users run over 21 mins (short page)]


[Test result for 6500 users run over 55 mins (short page)]


[Test result for 3500 users run over 55 mins (long page)]


[Test result for 6500 users run over 55 mins (long page)]

Summary

We have completed a round of stress testing, running 514 cases in total; the above link has the details. Only one case failed, with a 0.01% error rate.

