How to Do Performance Testing: Part 1 – Implementing Performance Testing: Setup and Getting Started

Welcome to the first post in a five-part series on how to do performance testing. These posts will cover various aspects of performance testing, including implementing performance testing, testing in the cloud and more. You will find a link to the next part of the series at the bottom of the page.

————————————————————————————————————————————————

We would like to share with you how we went about implementing performance testing at RES. In hindsight, it was pretty reactive: we took what we could get our hands on and made choices based on whatever requests came in. We hope you’ll learn from our experiences and come away encouraged. It started back in 2010:

The trigger was that we wanted to know what the impact and footprint of our products would be when running on a 64-bit operating system, since they had originally been developed for 32-bit.

We started performance testing as follows: take a network switch to make sure the systems we measure are isolated from the corporate network, and recycle a set of 3-5 year old server-type PCs from the IT department. We put them all in a 19” rack, and fortunately the IT department was kind enough to provide space, power and cooling. Our Windows performance test rig then looked like this (sketched in code after the list):

  • Three smaller machines to initiate RDP sessions to a Remote Desktop Server,
  • One larger machine, purchased to serve as the Remote Desktop Server,
  • Two intermediate machines to host services,
  • And a larger, fast machine to control it all, with Active Directory, DNS and a File Server installed on it.
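For orientation, here is a rough sketch of that inventory written as a rig definition a controller script could read. This is purely illustrative: the grouping, the names and the idea of a Python configuration are assumptions, not something described in the post.

```python
# Illustrative rig definition; the grouping and naming are assumptions,
# not taken from the actual RES setup described above.
TEST_RIG = {
    "controller": {
        "count": 1,
        "roles": ["Active Directory", "DNS", "File Server", "test control"],
    },
    "rdp_clients": {
        "count": 3,
        "roles": ["initiate RDP sessions to the Remote Desktop Server"],
    },
    "remote_desktop_server": {
        "count": 1,
        "roles": ["Remote Desktop Server under load"],
    },
    "service_hosts": {
        "count": 2,
        "roles": ["host product services"],
    },
}
```

A controller script can walk a structure like this to decide which machine gets which part of a scheduled test.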

We used our own automation product to script and schedule tests and to handle housekeeping. At the time, we did not want to use hypervisors: the software we needed to test ships with device drivers, and back then we did not feel at all comfortable running performance tests in a virtualised environment.

Sometimes you create tools and services that really pay off and stay in use forever. Other times, a lot of work ends up gathering dust. Here’s an example:

It’s good practice to run a test on a clean system. But without virtualisation, you don’t have snapshots. As a replacement, we installed a small server OS on a small partition of each test machine, and in that small OS partition we installed an agent of our automation product. We then made images of the operating systems that would run on the larger partition of these test machines, and archived them on the file server.
Prior to each test run, the automation agent would download the required operating system image from the file server into the second partition on the test system, and then reboot the machine into the desired OS (a rough sketch of this restore step follows). This worked fine; however, there were a couple of drawbacks, described after the sketch:
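As a minimal sketch of that restore step, assuming a Python-based agent, WIM image files on the file server and some external imaging tool (none of which the post actually specifies):

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations: the file-server share holding archived OS images
# and the larger partition on the test machine that receives them.
IMAGE_SHARE = Path(r"\\fileserver\os-images")
TARGET_PARTITION = Path("D:/")


def apply_image(image: Path, partition: Path) -> None:
    """Placeholder for whatever imaging tool writes the OS onto the partition."""
    raise NotImplementedError("replace with the imaging tool used on the rig")


def prepare_test_os(image_name: str) -> None:
    """Pull the requested OS image from the file server and reboot into it."""
    source = IMAGE_SHARE / f"{image_name}.wim"
    local_copy = TARGET_PARTITION / f"{image_name}.wim"

    # 1. Download the archived image from the file server.
    shutil.copyfile(source, local_copy)

    # 2. Write it to the larger partition (tool-specific, see placeholder above).
    apply_image(local_copy, TARGET_PARTITION)

    # 3. Reboot; the machine starts up under the freshly restored OS and the
    #    scheduled test run continues from there.
    subprocess.run(["shutdown", "/r", "/t", "0"], check=True)
```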

Our test systems were recycled hardware and certainly not identical. This meant that an image captured on one test system could not be used on another (it would load, but then crash halfway through start-up due to different hardware, addresses and so on).

As with all snapshots, you want operating system images to stay in sync with upgrades. We never got around to automating this, and because we did not have separate development and production systems, it simply did not happen.

Downloading images to a partition before every test run considerably slowed down the test process. We first postponed loading snapshots to once a day, and in the longer run we stopped doing it altogether. As a poor man’s workaround, we made our snapshot feature obsolete by running a product uninstall prior to each new install (despite the risks).
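A minimal sketch of that workaround, assuming an MSI-based installer (the post does not say how the product is packaged; the product code and build path below are made up):

```python
import subprocess

# Hypothetical identifiers: the MSI product code of the previously installed
# build and the path to the new build on the file server.
OLD_PRODUCT_CODE = "{00000000-0000-0000-0000-000000000000}"
NEW_INSTALLER = r"\\fileserver\builds\product-latest.msi"


def reset_product_without_snapshot() -> None:
    """Poor man's clean slate: silently remove the old build, install the new one."""
    # Uninstall the build left over from the previous test run.
    subprocess.run(
        ["msiexec", "/x", OLD_PRODUCT_CODE, "/qn", "/norestart"], check=True
    )
    # Install the build under test.
    subprocess.run(
        ["msiexec", "/i", NEW_INSTALLER, "/qn", "/norestart"], check=True
    )
```

Unlike restoring an image, this leaves behind whatever the uninstaller misses, which is exactly the risk mentioned above.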

Lessons learned:

  • Have separate development and production test systems, so there is time to maintain and improve the rig while the day-to-day work continues.
  • Automate housekeeping to minimise time spent on maintenance.

Despite losing our snapshot functionality, we still managed to run performance tests and help improve the product. That lasted until a server component was born in 2012. This component required testing with up to 1,000 sessions, far too many for the three machines we were using to start RDP sessions. More on that in our next post.

Read Part 2: Virtual Machines, Performance Testing Plan and Lessons Learned here

About the Author

Bart

Grew up in packet radio and X.25, got a job with Ethernet on yellow cables, then became interested in PCs and the internet, got involved with Windows network card device drivers, Java and network management systems, and now I have got lost in Azure and performance test (automation).
Find out more about @bwithaar