LoadRunner misses extra resources

Recently, I’ve encountered an issue with an AUT that crashes VuGen during the generation stage of a recording session. Now, I’m a big fan of record-edit-playback. The recording and generation logs are hugely useful, and recording is the easiest and quickest way to get a view of the application.
I call it record-edit-playback because the recording isn’t enough on its own for me to use as a performance test. But as a foundation, it’s fine. And scripts rarely play back immediately in any case; session IDs are designed to prevent that kind of thing.

The logs in particular are great for helping you find and construct parameter-capturing code for session IDs, and for identifying user variables you may want to emulate.
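For example (a hedged sketch, not code from the project; the parameter name, boundaries and URL are all invented), a session ID spotted in the generation log typically turns into a web_reg_save_param placed before the request whose response returns it:

    // Register the capture BEFORE the request whose response contains it.
    // The LB/RB boundaries here are made up; lift the real ones from the
    // generation log.
    web_reg_save_param("SessionID",
        "LB=sessionid=",
        "RB=\"",
        "ORD=1",
        LAST);

    web_url("Home",
        "URL=http://aut.example.com/",
        "Resource=0",
        "Mode=HTML",
        LAST);

    // {SessionID} is then substituted into later requests in place of
    // the recorded literal value.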

My process for recording is very simple, though I haven’t documented it here before, so I will now:

  • Record the script once
  • Save it as Scriptname_DATE_RAW
  • Save it again as Scriptname_DATE_WIP (work in progress).

That means you can always roll back if you need to. The WIP copy is the working document until it’s finished, at which point it becomes Scriptname_DATE_FINAL. I then hive it off to the controller as well as my local repository. I don’t like cluttering up the controller with WIP versions. And I don’t like the controller pulling scripts across the network; I just think it’s poor practice.

But I digress.

Since I couldn’t record the script, I used Fiddler as a proxy to capture the URLs I visited while manually executing the script in Firefox. Over on the DevDiary there’s an article about this, but the point I wanted to make is this: LoadRunner doesn’t capture everything that’s on a page. Fiddler’s output for the homepage was about 40 requests; a LoadRunner visit to the same homepage captured about 10 resources (I managed to get LR to do that much before it died again).

It seems that if a resource (for example a .css) contains sub-resources, Fiddler will see them but LoadRunner won’t. I don’t know if that is by accident or design, and I don’t know whether LR is getting them implicitly but just not showing them explicitly in the results and the logs. I intend to find out in due course, but it makes me wonder how I’ve not seen this before in 15 years of performance testing. Maybe it’s specific to this project; I could believe that, as we are uniquely complicated from what I’ve seen. But what if it’s not? How many issues could have been avoided if I’d seen a bottleneck on one of those resources (an underperforming piece of JavaScript, for example)? It’s all academic now anyway, but it’s certainly something I’ll look out for in the future. And as an aside, maybe LoadRunner’s recording engine isn’t as good as I’ve always thought it to be? Interesting times. In Belgium…
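If it does turn out that LR is quietly skipping resources referenced from inside a .css, the workaround is at least straightforward: request the missing items explicitly, flagged as resources. A sketch, with an invented URL standing in for something Fiddler saw but the recording didn’t:

    // Hypothetical sub-resource (e.g. a background image referenced
    // from the stylesheet) added by hand after comparing with Fiddler.
    web_url("sprite.png",
        "URL=http://aut.example.com/css/images/sprite.png",
        "Resource=1",                 // fetch as a resource, not a page
        "RecContentType=image/png",
        LAST);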

Automated Testing Best Practices (Basic)

I wanted to prepare an article that sets out the very basic best practices for automated testing, whether it’s functional or non-functional. It seems to me that all the tools, whether open-source or commercial, allow for these essential practices, presumably because this is the way we should all be doing it… some of us aren’t, you know who you are, and you’re very naughty. Also, your testing isn’t as good as it should be…

This article is very much a work-in-progress as I braindump…

  1. All scripts should have a manual equivalent, because this is the foundation upon which the automation is built. It doesn’t have to be complicated, it doesn’t have to be as thorough as the automated script, but it does have to accurately specify the steps to follow.
  2. Some of the tools are best used in a record-edit-playback loop (LoadRunner, for example). Others pretty much require you to edit them as you progress through your script, adding checkpoints and the like as you go (QTP springs to mind). If it’s a pain to go and insert checks after the recording, that’s just tough: your test is only as good as the conditions it checks for.
  3. A good understanding of the tool is required before you begin any testing phase. Yes, we all have to start somewhere, and that somewhere should always be with training and additional reading.
  4. All scripts should have transactions, whether it’s a page click, a text entry, a mouse click, whatever. Defining your test by these transactions makes debugging easier, makes the scripts more readable, and ultimately makes you look professional. Alternatively, a multi-step business process can be mapped as a single logical transaction.
  5. All scripts should include checkpoints for every “transaction”, regardless of whether there’s a condition defined. Again, it’s not complicated, but your checkpoints need to be sufficient that if something is broken AT ANY POINT your test can detect it.
    If you check for something unique to where you are in the AUT and set pass/fail criteria around that, that’s perfect. The sooner a failed test is known to have failed, the sooner the processor is moving on to the next activity, saving time and money. And the sooner you’ll be debugging, which is one of the fun bits in my opinion. (There’s a sketch of a transaction plus checkpoint after this list.)
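To make points 4 and 5 concrete, here’s roughly what a transaction wrapped around a checkpoint looks like in LoadRunner’s C flavour (the step name, URL and check text are all invented for illustration):

    // Register the check BEFORE the request it applies to.
    web_reg_find("Text=Order confirmed",    // hypothetical page text
        "Fail=NotFound",
        LAST);

    lr_start_transaction("submit_order");

    web_url("SubmitOrder",
        "URL=http://aut.example.com/order.do",
        "Resource=0",
        "Mode=HTML",
        LAST);

    // LR_AUTO marks the transaction failed if the check above misses.
    lr_end_transaction("submit_order", LR_AUTO);

The same shape exists in most tools: name the step, do the step, verify something unique to the step.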

Basic Approaches to Performance Testing

In which I shall attempt to state that there are 3 and only 3 approaches to performance testing…

1. The comparative method.

We’ll assume you have an application called AUT v1.0. We’ll further assume you have a scenario built to test AUT v1.0 and to hit it sufficiently hard that its response times are less than perfect. Ideally it should be walking: not limping, and certainly not sprinting along.

We’ll then suggest that v1.1 is coming out soon and that much of the functionality is unchanged. There is always new functionality, that’s the whole point, but it is the level of newness that dictates whether this approach can work.

Run your scenario against v1.0 as often as is necessary for consistent timings to be established. I maintain that 3 is the absolute minimum, and that more (often much more) is better. Gather your results so that direct comparison of transaction times between runs is possible.

Run            1    2    3    4    5
Transaction A  1    1    1.1  2    1
Transaction B  2    2    2.2  4    2
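A quick, hedged sketch of the consistency check itself: take each transaction’s mean across the runs and flag anything that strays too far from it (the 50% band is an arbitrary choice of mine; with the Transaction A figures above, run 4 is the one that gets flagged):

    /* Minimal consistency check over the sample table above. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double a[] = {1.0, 1.0, 1.1, 2.0, 1.0};   /* Transaction A times */
        int i, n = 5;
        double sum = 0.0, mean;

        for (i = 0; i < n; i++) sum += a[i];
        mean = sum / n;                            /* 1.22 */

        for (i = 0; i < n; i++)
            if (fabs(a[i] - mean) / mean > 0.5)    /* arbitrary 50% band */
                printf("Run %d (%.1f) is inconsistent with mean %.2f\n",
                       i + 1, a[i], mean);
        return 0;
    }

Run 4 fails the band, so you’d discard it (or investigate it) and keep running until the remaining timings sit comfortably together.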


Future Development

I’ve been crazy-busy working and have neglected the site for a while. I’m still crazy-busy, but work on the site is continuing, although most of it seems to be behind the scenes at the moment.

Currently I’m building a multi-form PHP application with a database to provide something to script against. Once that’s done I’ll be re-working and completing the article on beginning QTP.
I’ve a future article on end-to-end QTP automation to write, based on a project I’ve just completed at a former client. The software part of that is actually done, although it needs to be anonymised. It uses scheduled tasks, VBScript, and Excel with VBA to collate the results.

There is so much more to prepare and write posts on, I think it might finally be time to prepare a “to do” list.

1. Sort out the ads on this site so that they are pointing to genuine things…
2. Build some test apps
3. Catch up on the Selenium article