Friday, March 18, 2016

Practical mathematics

For a brief while I was a mathematician, but believe me, statistics was never my strong suit! So this article offers some very basic thoughts on a way to get more out of performance testing with very little effort.

I've come up against a few real-world situations where performance test analysis is almost complete and test script development is about to begin. The analysis has broken the client's activities down into business processes and given an indication of the average sizes of the data objects those processes handle: for example "12 rows on an invoice" or "10 items in an order". You're using a realistic spread of data and you know that by measuring performance across enough transactions you'll get valid results. Then the client points out that they care quite a lot about the invoices or orders that are much larger than average. Perhaps they represent more important customers or suppliers. And checking your figures, you find that when you look at just the top 5% of the objects, their average is indeed much larger: they could average, say, 50 or 100 items.

You feel a bit sheepish at this point, but you know you're dealing with a fairly skewed distribution, and you're secretly pleased the question was asked before it's too late. So what can you do?

An approach I've often used during performance test analysis is to divide the data into two sets, representing the top 10% and the remaining 90%. You may have to identify test data that matches the criteria, or perhaps load enough fresh data representing the "normal" and "large" items. Then when you set up your load test, you include two copies of the script, one using only "normal" data and one using only "large" data. If you're lucky, the load test tool will automatically collect performance data for the two sets of test users side by side - if not, you may need a trick such as changing the transaction names so you can tell the "normal" and "large" results apart.
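As a sketch of how that split might work, here is one hypothetical way to partition records at a percentile boundary. The record shape (`{"items": n}`), the function name and the 90% cutoff are all illustrative assumptions, not any tool's standard.

```python
# Illustrative sketch only: partition test data records into "normal"
# and "large" sets at a percentile boundary. The record shape and the
# default 90% cutoff are assumptions for the example.

def split_by_size(records, percentile=90):
    """Return (normal, large, cutoff): records at or below the
    percentile boundary for item count, records above it, and the
    boundary value itself."""
    counts = sorted(r["items"] for r in records)
    cutoff = counts[int(len(counts) * percentile / 100) - 1]
    normal = [r for r in records if r["items"] <= cutoff]
    large = [r for r in records if r["items"] > cutoff]
    return normal, large, cutoff
```

With a skewed data set - say 90 invoices of 10 rows and 10 invoices of around 60 - the "large" set's average comes out several times the overall average, which is exactly the effect the client is worried about.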

When you execute the load tests you will find that some transactions, such as opening a line item, perform much the same in each script. But you will also see interesting effects: perhaps the "large" script takes five times as long, on average, to open 50 items as the "normal" script does to open 10. You can then go back to your non-functional requirements and assess the likely impact on users. You can have much more interesting conversations with the designers and developers of the system and spot quick ways to make significant performance improvements. You can also use monitoring tools such as CA Wily Introscope to probe the behaviour of the application during tests. So this is a technique well worth considering.
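One hedged way to quantify that effect is to compute, per transaction, the ratio of the average response time in the "large" script to that in the "normal" one. The transaction names and timings below are invented for illustration.

```python
# Hypothetical analysis sketch: for each transaction name, compare the
# mean response time in the "large" script against the "normal" one.
from statistics import mean

def size_sensitivity(normal_times, large_times):
    """Map transaction name -> ratio of large/normal mean response time.
    Ratios near 1.0 suggest the transaction is insensitive to data size."""
    return {name: round(mean(large_times[name]) / mean(normal_times[name]), 2)
            for name in normal_times if name in large_times}
```

A ratio close to 1.0 points at transactions you can ignore; a ratio of 5.0 on "open order" is the conversation-starter for the designers and developers.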

There will always be situations where the simple approach does not work. For example, you may need to produce a graph showing the variations in transaction response time caused by data size. To be honest, I've only done that once in eleven years!

Friday, May 22, 2015

Cloud Testing Again!

Cloud Testing. Well, it's still being talked about, and recently I looked into Microsoft's VSTS features for Cloud Testing.

I found that in VSTS 2013 Ultimate Edition there is increasing support for running load tests in the Cloud, and Update 4, the most recent I've reviewed, links fairly seamlessly with Microsoft's general Cloud services (Azure). Let's assume you have created a Visual Studio project, recorded web tests and created a load test scenario that uses them. Your test settings file now includes an option to select "Run Tests using Visual Studio Online" rather than running via the local Controller and Agent software built around VSTS. When you run the load test, this automatically selects load test agents from Microsoft's Cloud based on the number of users required.

Before you can run load tests in the Cloud, you first need to create a Visual Studio Online account and log in. You can then access a dashboard which lists recently run load tests and navigate into the results. You can also "Download" the results, which saves all the performance counters to your local SQL instance, where you can view and analyze load results in VSTS just as usual.

At the moment the location of the load agents is fixed, corresponding to the location in your Visual Studio Online account. Referring to this FAQ page, you can read about a significant enhancement in VSTS 2013 Update 5 which will allow you to select which Microsoft data centre around the world the test runs from. This means you can see how the actual response times for a web application are affected by user location. The next enhancement people will ask for is a means to configure this in the load test scenario itself; being able to simulate groups of users in the same load test from various locations would speed up testing.

You can find out more about the load test process from Microsoft here.

Monday, September 03, 2012

Cloud Testing and Last Mile Testing in the real world

Cloud Testing. It's being talked about everywhere. But what does it all mean for performance testing in the real world?

For some while now, there have been broadly two approaches to performance testing:

1) From the Internet. This is particularly suited to public Internet sites because all the components involved are tested - including your Internet connection and firewalls. For this to work effectively, load injection must be done from servers connected to the Internet at points which provide sufficient bandwidth. Broadly speaking, Cloud Testing takes this approach.

2) Within a private network. This is particularly suited for testing internal systems as load can be delivered from various points in your network to gather realistic performance results (and these can be geographically distant points on your network).

Over the years, numerous organisations have hosted remote testing services via the Internet, in some cases supplying a tool geared for customers to use, in other cases providing a complete test service. HP promote their Software-as-a-Service (SaaS) offering based around the well-established Performance Center test tool. SOASTA have recently entered the market with their CloudTest web-load testing tool, which allows customers to scale quickly to high loads as it comes ready configured for deployment on various commercial Cloud services. Finally, fans of JMeter are now gathering around a Cloud-based service called BlazeMeter.

Another vendor advocating a slightly different approach is Gomez, now part of Compuware. Their philosophy is that Cloud Testing as described above does not provide sufficiently accurate end-to-end response times in the real world: unless you can place load injectors at narrowly defined geographic regions of the Internet, you can't measure response times in those regions, and performance there will always be uncertain. And guess what: Gomez provides these highly distributed points of presence on the Internet, packaged up as what they call Last Mile Testing. Take a look at their publications and see how well they match your performance testing issues. But if you just want to test your firewall, load balancer and so on, you may decide it does not matter too much where you test from.

There are no right or wrong answers for Cloud Testing or Last Mile Testing. You can't escape reviewing your business situation, defining specific requirements, assessing the risks if those requirements are not met, and weighing the costs of the various testing approaches. So proceed with your eyes open, and be prepared to discuss the above areas in some detail along the way. And if you feel you need help, do seek professional advice.

Thursday, May 12, 2011

I mentioned recently I would say more about Agile performance testing, including phasing approaches.

Traditional approaches often leave testing until a fairly late stage in the application development lifecycle. The Acutest approach to testing services involves testing early in the lifecycle, and that means we often have to test component parts of a system in isolation, before everything is complete.

Sometimes this can be as simple as choosing a small number of business processes and running a test cycle geared around those. In the early stages we expect to concentrate on finding application bottlenecks such as coding and design issues. The majority of the tests would be ramp tests, which search out performance limits in specific areas of interest. This works best when we can combine it with our risk-based approach and test the most likely areas of failure, especially those most critical to users. Sometimes that can be a challenge, as developers have a tendency to release the easy bits first!
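As an illustration of what a ramp test schedule might look like, the sketch below generates a stepped load profile. The step size, interval and ceiling fed to it are arbitrary examples, not recommendations; real values depend on the system under test and on the scheduling features of your tool.

```python
# Sketch of a stepped ramp profile for bottleneck-hunting tests.
# All the numbers passed in are illustrative, not recommendations.

def ramp_profile(start_users, step, interval_s, max_users):
    """Yield (elapsed_seconds, concurrent_users) pairs, stepping the
    user count up until the ceiling is reached."""
    users, elapsed = start_users, 0
    while users <= max_users:
        yield elapsed, users
        users += step
        elapsed += interval_s
```

Running this until response times degrade (or errors appear) gives a first estimate of where the performance limit sits for the area under test.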

One thing I always recommend is running a combined load test as early as possible, even if not all parts of the system are stable enough to test and have to be excluded, or throughput requirements are still being agreed. You always learn something - about the system under test, the test environment, or the test tool. Typically we would spend 20% of the time in the first cycle on this, building up to 80% in later cycles. Even in later cycles, though, we would leave some time for new ramp tests and for re-runs of failed tests from earlier cycles.

Generally, working co-operatively within a project or programme brings the best results. The leading vendors of performance testing tools recognise that a team approach is needed to deliver performance tests on time. Our experience is that benefits also come from including software developers. Agile performance testing is possible, with short cycles, tight configuration management and good communication. Recording and customising scripts in short timescales is a challenge, but it can be done. This approach also builds performance awareness into development teams, which brings benefits down the line.

So there are some thoughts on phasing of performance tests. Let me know if these subjects are of interest.

Monday, April 25, 2011

What a breath-taking roller coaster it's been!

Still, it's good to be busy, as we are at Acutest. Anyway, I'm reviving this blog and am interested in covering in more detail some of the things we've found beneficial over the years, such as:

- Agile performance testing, including phasing approaches
- Configuring virtual users and scripts in various performance testing tools
- Approaches to business process testing, including financial and ERP software from Elite, a Thomson Reuters business, and SAP.

More later folks!

Monday, August 04, 2008

Some of the links to articles by Alberto Savoia mentioned in my earlier article:

Web load testing (article 1) - how to keep the load steady

have moved. Here are the up-to-date links:

Web load test planning
Trade secrets from a web testing expert
Web page response time 101

Monday, March 27, 2006

Performance monitoring – Windows server monitoring

I have been doing some research on what system monitoring is useful during performance testing.

I have concentrated on the minimum useful set of information needed to verify that no bottlenecks occur when using a performance testing tool. Even this minimal set should be useful for making a rough prediction of the maximum load the system under test could support.

There’s a wealth of information available on this subject. Two sites I found particularly useful were these:

One subject on which there seemed to be a range of opinions was whether it is best to set up monitoring remotely or locally. Remote monitoring generates some network traffic, while local monitoring carries the overhead of running perfmon itself. Do you think it is significant either way, and if so, in what situations? Perhaps this only matters if you are already close to resource utilization limits.

One thing which is probably always true is that it's best to be selective and collect only the performance counters you need. This document is my attempt to propose a subset which will nearly always be useful. Take a look and let me know what you think. Is anything obviously missing? Is it too comprehensive?
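To make the idea concrete, here is one possible minimal counter set with illustrative warning levels. The counter names are standard Windows perfmon counters, but the thresholds are rough rules of thumb you would tune for your own system, and the checking function is just a sketch of how sampled values might be assessed.

```python
# A possible minimal set of Windows perfmon counters for load tests.
# The thresholds are illustrative rules of thumb, not official limits.
COUNTERS = {
    r"\Processor(_Total)\% Processor Time": 80.0,          # ceiling (%)
    r"\Memory\Available MBytes": 500.0,                    # floor (MB)
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length": 2.0,  # ceiling
}

def flag_bottlenecks(samples):
    """Return the counters whose sampled average breaches its limit.
    `samples` maps counter name -> list of sampled values."""
    flagged = []
    for counter, limit in COUNTERS.items():
        values = samples.get(counter)
        if not values:
            continue
        avg = sum(values) / len(values)
        # Available MBytes is a floor; the other counters are ceilings.
        breached = avg < limit if "Available MBytes" in counter else avg > limit
        if breached:
            flagged.append(counter)
    return flagged
```

Keeping the set this small limits the monitoring overhead while still catching the most common CPU, memory and disk bottlenecks.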