Web-Based Testing Database #2

So I’ve done the obvious analysis of the performance testing process.

  1. Pre-Test Criteria
  2. Test Execution
  3. Post-Test Results

Clearly that lends itself to at least one form for the run details and a second form for the post-run details.
I believe a single table might be the easiest approach. In any case, a single primary key of run ID will be used; it will be auto-assigned within my database. I will also have a test timestamp and a test name, but the run ID will be the only key.

The Pre-Test information shall consist of, but not be limited to:

  • A test run ID
  • A test timestamp
  • A test name
  • An environment
  • Scripts, run-time settings, vusers
  • A test purpose
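For what it’s worth, the list above might map onto the single-table approach something like this. This is a sketch only – the column names, types and sizes are my guesses, not a final design:

```sql
-- Hypothetical schema for the single-table approach.
-- run_id is auto-assigned and is the only key, as described above.
CREATE TABLE test_run (
    run_id           INT UNSIGNED NOT NULL AUTO_INCREMENT,
    test_timestamp   DATETIME     NOT NULL,
    test_name        VARCHAR(100) NOT NULL,
    environment      VARCHAR(50),
    scripts          TEXT,        -- free text until I can pull scenario details
    runtime_settings TEXT,
    vusers           INT UNSIGNED,
    test_purpose     TEXT,
    PRIMARY KEY (run_id)
);
```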

EDIT 23/07/2013:
I’ve revisited this list today. I’m not sure how best to track the scripts included and the run-time settings; it’s a pain to enter that stuff into a form by hand. Ideally I’d steal it from the scenario details. Failing that, I just won’t include that information directly.

I’ve started work on a variety of pages and the database itself. I have a form that accepts some of the inputs mentioned above, a working script for uploading and unpacking a zip file, and a page that will pull the results out in a structured (tabular) format. That last one looks AWFUL in this particular WordPress template, though.
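The unpack step in that script is roughly along these lines, using PHP’s ZipArchive. A sketch only – the function and variable names here are mine, not the actual script, which assumes the zip has already been uploaded and saved to disk:

```php
<?php
// Sketch of the unpack step for an uploaded results zip.
// Assumes the file is already on disk; names are placeholders.
function unpack_results($zipPath, $destDir)
{
    if (!is_dir($destDir) && !mkdir($destDir, 0755, true)) {
        return false; // couldn't create the destination directory
    }
    $zip = new ZipArchive();
    if ($zip->open($zipPath) !== true) {
        return false; // not a readable zip file
    }
    $ok = $zip->extractTo($destDir);
    $zip->close();
    return $ok;
}
```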

I’ve basically been pressing on with learning PHP and how best to structure all this into a single validated page. I’ll post all of the source code on here in the next few posts, once I’ve sanitised it.

Web-Based Testing Database

There are a number of defect management tools and a number of test management tools in the world. Some applications combine the two: Quality Center (QC), known as Application Lifecycle Management (ALM) since HP’s purchase of Mercury Interactive, is the one I am most familiar with, to the point where I suspect it’s bundled with QTP and Loadrunner by the salesman.

The problem is that Loadrunner doesn’t integrate well with QC. Performance defects tend to be fewer but broader in scope, and there seems to be a fundamental issue of behaviour when it comes to analysing and developing performance fixes, work that spans the defect management team, the performance testing team and the development team.

I can’t explain why that would be, but in 90% of the offices I’ve worked in, performance testing defects are tracked outside the defect management process and outside the development process, and have very little visibility with management.

The 10% who raise defects in a defect management system are probably doing it right, while everyone else is just being a little short-sighted, lax or potentially negligent.

There is also a peculiar belief in the separation of performance testing and performance tuning into separate streams, as if the teams differ and one doesn’t lead to the other in a seemingly endless procession of execution and analysis, development and release. (I accept that behaviour can be different in this phase; I’m just not always sure it should be. Why compromise the release process just because you’re in a test environment? It seems like the one place, especially a pre-production environment, where that would be incredibly ill-advised.)

I suspect, though I’m biased, that it comes from the performance testing team originally. I hear a lot of “I don’t decide if performance is defective or not, I just measure it and report. Someone else can decide whether it’s acceptable”.
I’ve said it myself. It’s not right though; it’s just an excuse for doing a little less work. I’ve automated checking NFRs against actuals in the past, even to the point of automating the entire reporting process up to sending the report out to management. A manual check there is ESSENTIAL in my opinion. (One of my current colleagues disagrees, but he’s not reporting upwards, more horizontally to developers who read performance from server perfmon stats rather than transaction response times.)
Which just proves you always have to consider the audience, I suppose.

Anyway, whilst it’s true that performance testers are rarely decision-makers, it is absolutely our duty, in my opinion, to provide the best information possible to those decision-makers. And though I absolutely believe that separating the performance test results from the NFRs until the last minute is appropriate and advisable, it’s a relatively easy task to check an NFR against an actual result from Loadrunner, and where there is a large-scale negative deviation – say 50% slower than expected – there is no reason whatsoever why a defect shouldn’t be raised.
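That check really is as simple as it sounds. A minimal sketch of it in PHP, where the array shapes are my own invention rather than anything Loadrunner produces directly:

```php
<?php
// Flag any transaction whose actual response time is more than 50%
// over its NFR. $nfrs and $actuals map transaction name => seconds.
function nfr_breaches($nfrs, $actuals, $threshold = 0.5)
{
    $breaches = array();
    foreach ($nfrs as $txn => $limit) {
        if (!isset($actuals[$txn])) {
            continue; // transaction not exercised in this run
        }
        if ($actuals[$txn] > $limit * (1 + $threshold)) {
            $breaches[$txn] = array('nfr' => $limit, 'actual' => $actuals[$txn]);
        }
    }
    return $breaches;
}
```

So a user login with a 5-second NFR and a 9-second actual gets flagged (9 is beyond the 7.5-second tolerance), and that flag could feed straight into a defect.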

Matt: The most common reason in my experience is that performance testers don’t like raising defects, and developers don’t like looking at performance defects. “Transaction A is slow” isn’t precise enough, presumably, even with qualifying statements: “Transaction A – user login – is outside its 5-second NFR, with actuals under load of 9 seconds.”

Add in that we commonly work in our own dedicated environment with its own baseline, and that developers typically can’t replicate “load” – no Loadrunner for them – and I can see that it’s painful. Really, though, it’s just an exercise in co-ordination and communication.

Now, though, there is Performance Center, which appears to be an online version of the Loadrunner Controller with QC’s defect management area bundled in. The problem with this seems to be that no-one is buying it.

Which brings me, finally, to the point of this post: I’m going to work on an online test execution database.

The aim is to provide somewhere for a performance testing team to define the run they’re about to execute, execute it, upload the results and add observations. Ultimately I want to add a comparison engine and somewhere to check against defined NFRs, and it may also be possible to add the facility to actually run the test, which will necessitate access to the run-time settings for a script and/or the scenario.

But if I can make it do all of that, I can also add a scheduler function to it.

There are a number of factors to all of this, not least security. This is not going to be a quick win, and I’ll need to break the project into sub-components just to keep track of it. It should keep me busy for a while though, so that’s nice, and it’s an area I’ve not looked at before.

Expect more posts soon on various related topics as I define the scope, the parts and how I’m going to glue it all together.

Ok, so what’s next?

Since the majority of the work on the Excel version of the batch scheduler is now complete, I find myself wondering what to work on next. I could absolutely press on with a stand-alone executable version, but I like to mix things up a bit, keep them from getting stale.
The obvious downside to that is that by the time I revisit the Excel code I’ll have forgotten how it works. But then, that is precisely why this site exists: to document the content of my brain.

I’m a little busy right now with sorting out some test data for a mobile application, but I feel like working on something web-based. I’ll make my mind up in a day or so, but I have a feeling it’ll be the results database and comparison engine. Which clearly needs a better name, although, like my usual naming convention, it is what it says it is.

As I see it, there are a number of elements to any test result database even before you add comparison capabilities. You need a pre-test form into which go all the usual run-time settings, purpose of test, time and date, environment in use, etc.
You need a test result form into which the results can be uploaded, and that may need to house all of the analysis files from LR and the HTML report. To keep the size down, I’d like to upload both as compressed files. No idea how to do that yet.
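For the upload half, at least, PHP’s built-in file upload handling looks like it might be enough: accept a .zip from the form and store it as-is. A rough sketch, where the field name, the .zip-only rule and the storage path are all placeholder choices of mine:

```php
<?php
// Sketch of accepting a compressed results file from a form upload.
// The field name and .zip whitelist are placeholders, not a design.
function store_results_upload($file, $storeDir)
{
    if (!isset($file['error']) || $file['error'] !== UPLOAD_ERR_OK) {
        return null; // upload failed or nothing was sent
    }
    $name = basename($file['name']);
    if (!preg_match('/\.zip$/i', $name)) {
        return null; // only accept compressed results for now
    }
    $target = rtrim($storeDir, '/') . '/' . $name;
    if (!move_uploaded_file($file['tmp_name'], $target)) {
        return null; // not a genuine uploaded file, or a disk problem
    }
    return $target;
}

// In the form page this would be called as something like:
// $path = store_results_upload($_FILES['results_zip'], '/var/results');
```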
It should also contain any tester-observations from the execution.
So we have at least two tables so far.

I’m going to spend a bit more time thinking on this…