Web-Based Testing Database

There are a number of defect management tools and a number of test management tools in the world, and some applications combine the two. Quality Center (QC), known as Application Lifecycle Management (ALM) since HP's purchase of Mercury Interactive, is the one I am most familiar with, to the point where I suspect it's bundled with QTP and LoadRunner by the salesman.

The problem is that LoadRunner doesn't integrate well with QC, performance defects tend to be fewer but broader in scope, and there seems to be a fundamental issue of behaviour when it comes to analysing and developing fixes for performance, one that spans the defect management team, the performance testing team and the development team.

I can't explain why that would be, but in 90% of the offices I've worked in, performance testing defects are tracked outside the defect management process and outside the development process, and they have very little visibility with management.

The 10% who raise defects in a defect management system are probably doing it right, while everyone else is just being a little short-sighted, lax or potentially negligent.

There is also a peculiar belief in the separation of performance testing and performance tuning into separate streams, as if the teams differ and one doesn't lead to the other in a seemingly endless procession of execution and analysis, development and release. (I accept that behaviour can be different in this phase; I'm just not always sure it should be. Why compromise the release process just because you're in a test environment? It seems like the one place, especially a pre-production environment, where that would be incredibly ill-advised.)

I suspect, though I'm biased, that it comes from the performance testing team originally. I hear a lot of "I don't decide if performance is defective or not, I just measure it and report. Someone else can decide whether it's acceptable."
I've said it myself. It's not right though; it's just an excuse for doing a little less work. I've automated checking NFRs against actuals in the past, even to the point of automating the entire reporting process up to sending the report out to management. A manual check there is ESSENTIAL in my opinion. (One of my current colleagues disagrees, but he's not reporting upwards, more horizontally to developers who read performance from server perfmon stats rather than from transaction response times.)
Which just proves you always have to consider the audience, I suppose.

Anyway, whilst it's true that performance testers are rarely decision-makers, it is absolutely our duty, in my opinion, to provide the best information possible to those decision-makers. And though I absolutely believe that separating the performance test results from the NFRs until the last minute is appropriate and advisable, it's a relatively easy task to check an NFR against an actual result from LoadRunner, and where there is a large-scale negative deviation, say 50% slower than expected, there is no reason whatsoever why a defect shouldn't be raised.
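To make that concrete, here's a rough sketch in Python of the sort of check I mean. It assumes the transaction response times have been exported from the LoadRunner Analysis summary to a CSV with "transaction" and "percentile_90" columns, and that the NFR targets sit in a simple dictionary; the file name, column names and example NFRs are my own placeholders, not anything LoadRunner itself produces.

```python
import csv

# NFR targets in seconds, keyed by transaction name (illustrative values only).
nfrs = {
    "user_login": 5.0,
    "search_products": 3.0,
}

# Flag anything more than 50% slower than its NFR as a candidate defect.
DEVIATION_THRESHOLD = 0.5

def check_results(results_csv="transaction_summary.csv"):
    """Compare actuals against NFRs and return a list of candidate defects."""
    defects = []
    with open(results_csv, newline="") as f:
        for row in csv.DictReader(f):
            name = row["transaction"]
            actual = float(row["percentile_90"])
            target = nfrs.get(name)
            if target is None:
                continue  # no NFR defined, nothing to check against
            deviation = (actual - target) / target
            if deviation > DEVIATION_THRESHOLD:
                defects.append(
                    f"{name}: {actual:.1f}s against a {target:.1f}s NFR "
                    f"({deviation:.0%} over)"
                )
    return defects

if __name__ == "__main__":
    for line in check_results():
        print("Candidate defect:", line)
```

The output is just a list of candidate defects. Whether they actually get raised is still a human decision, which is exactly where I think the manual check belongs.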

Matt: The most common reason in my experience is that performance testers don't like raising defects, and developers don't like looking at performance defects. "Transaction A is slow" isn't precise enough, presumably, even with qualifying statements: "Transaction A (user login) is outside its 5-second NFR, with actuals under load of 9 seconds."

Add in that we commonly work in our own dedicated environment with its own baseline, and that developers typically can't replicate "load" (no LoadRunner for them), and I can see that it's painful, but really it's just an exercise in co-ordination and communication.

Now, though, there is Performance Center, which appears to be an online version of the LoadRunner Controller with QC's defect management area bundled within. The problem with this seems to be that no-one is buying it.

Which brings me, finally, to the point of this post: I'm going to work on an online test execution database.

The aim is to provide somewhere for a performance testing team to define the run they're about to execute, execute it, upload the results and add observations. Ultimately I want to add a comparison engine, somewhere to check against defined NFRs, and it may also be possible to add the facility to actually run the test, which will necessitate access to the run-time settings for a script and/or the scenario.
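As a first, very rough sketch of what the underlying store might look like, here's a possible schema using SQLite from Python. The table and column names are my own first guesses at the structure, not a finished design.

```python
import sqlite3

# A first guess at the core tables: runs that are defined before execution,
# the results uploaded afterwards, and free-text observations against a run.
# All names here are placeholders for the eventual design.
SCHEMA = """
CREATE TABLE IF NOT EXISTS test_run (
    run_id      INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    environment TEXT,
    scenario    TEXT,            -- LoadRunner scenario the run maps to
    planned_at  TEXT,            -- when the run is due to execute (ISO timestamp)
    executed_at TEXT             -- filled in once the run has happened
);

CREATE TABLE IF NOT EXISTS run_result (
    result_id        INTEGER PRIMARY KEY,
    run_id           INTEGER NOT NULL REFERENCES test_run(run_id),
    transaction_name TEXT NOT NULL,
    percentile_90    REAL,       -- seconds
    nfr_target       REAL        -- seconds, so comparison can be done in place
);

CREATE TABLE IF NOT EXISTS observation (
    observation_id INTEGER PRIMARY KEY,
    run_id         INTEGER NOT NULL REFERENCES test_run(run_id),
    note           TEXT NOT NULL,
    created_at     TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def init_db(path="perf_runs.db"):
    """Create the database file and tables if they don't already exist."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    conn.commit()
    return conn
```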

But if I can make it do all of that, I can also add a scheduler function to it.
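A scheduler could be as simple as polling that run table for anything whose planned time has passed but hasn't yet executed. A minimal sketch, assuming the hypothetical test_run table above (with ISO-format UTC timestamps) and a launch_run callback that will eventually kick off the scenario:

```python
import time
from datetime import datetime, timezone

def poll_for_due_runs(conn, launch_run, interval_seconds=60):
    """Poll the test_run table and launch anything due but not yet run.

    launch_run is a placeholder for whatever eventually starts the scenario.
    """
    while True:
        now = datetime.now(timezone.utc).isoformat()
        due = conn.execute(
            "SELECT run_id, name FROM test_run "
            "WHERE planned_at <= ? AND executed_at IS NULL",
            (now,),
        ).fetchall()
        for run_id, name in due:
            launch_run(run_id)  # hand off to whatever drives the test
            conn.execute(
                "UPDATE test_run SET executed_at = ? WHERE run_id = ?",
                (now, run_id),
            )
            conn.commit()
        time.sleep(interval_seconds)
```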

There are a number of factors to all of this, not least security. This is not going to be a quick win, and I'll need to break the project down into sub-components just to keep track of it. It should keep me busy for a while, though, so that's nice, and it's an area I've not looked at before.

Expect more posts soon on various related topics as I define the scope, the parts and how I’m going to glue it all together.
