Loadrunner misses extra resources

Recently I encountered an issue with an AUT that crashes VuGen during the generation stage of a recording session. Now, I'm a big fan of record-edit-playback. I find the recording and generation logs hugely useful, and recording is the easiest and quickest way to get a view of the application.
I call it record-edit-playback because the recording isn't enough on its own for me to use as a performance test. But as a foundation, it's fine. And scripts rarely play back immediately in any case; session IDs are designed to prevent exactly that kind of thing.

The logs in particular are great for helping you find and construct parameter-capturing code for session IDs, and for identifying user variables that you may want to emulate.
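
By way of illustration, here's a minimal sketch of the kind of parameter-capturing code the logs help you construct; the parameter name, boundaries and URLs are assumptions for the example, not taken from any real AUT:

// register the capture BEFORE the request whose response contains the value
web_reg_save_param("sessionId",
    "LB=sessionid=",    // assumed left boundary
    "RB=\"",            // assumed right boundary
    "Ord=1",
    LAST);

web_url("home",
    "URL=http://example.com/home",
    LAST);

// the captured value is then substituted into later requests
web_url("account",
    "URL=http://example.com/account?sessionid={sessionId}",
    LAST);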

My process for recording is very simple, though I haven't documented it here before, so I will now:

  • Record the script once
  • Save it as Scriptname_DATE_RAW
  • Save it again as Scriptname_DATE_WIP (work in progress).

That means you can always roll back if you need to. The WIP copy is the working document until it's finished, at which point it becomes Scriptname_DATE_FINAL. I then hive it off to the controller as well as to my local repository. I don't like cluttering up the controller with WIP versions, and I don't like the controller pulling scripts across the network; I just think it's poor practice.

But I digress.

Since I couldn't record the script, I used Fiddler as a proxy to capture the URLs I visited while manually executing the script in Firefox. Over on the DevDiary there's an article about this, but the point I wanted to make is this: Loadrunner doesn't capture everything that's on a page. Fiddler's output was about 40 lines for the homepage, while a Loadrunner visit to the same homepage captured 10 lines of resources (I managed to get LR that far before it died again).

It seems that if a resource (for example a .css file) contains sub-resources, Fiddler will see them but Loadrunner won't. I don't know if that is by accident or design, and I don't know whether LR is implicitly fetching them but just not showing them in the results and the logs. I intend to find out in due course, but it makes me wonder how I've not seen this before in 15 years of performance testing. Maybe it's specific to this project; I could believe that, as we are uniquely complicated from what I've seen. But what if it's not? How many issues could have been avoided if I'd seen a bottleneck on one of those resources – an underperforming JavaScript file, for example? It's all academic now anyway, but it's certainly something I'll look out for in the future. And as an aside, maybe Loadrunner's recording engine isn't as good as I've always thought it to be? Interesting times. In Belgium…
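
If it turns out LR genuinely isn't fetching them, one workaround is to add the missing items by hand under EXTRARES in the relevant step. A minimal sketch, with illustrative URLs standing in for whatever Fiddler actually reported:

web_url("home",
    "URL=http://example.com/",
    "Resource=0",
    "Mode=HTML",
    EXTRARES,
    // sub-resources Fiddler saw but the recording missed (URLs illustrative)
    "Url=/styles/fonts.css", ENDITEM,
    "Url=/images/sprite.png", ENDITEM,
    LAST);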

Extending Loadrunner Scripts with C – Function Library #1.1

Actually, this is more like 1.1, in as much as it ties into the previous post. I was blogging about building audit logs and data files via an "audit" script. That's what I call them; I'm not sure if there's a full-blown technical name, but I use them to verify, validate and build data to be used in actual test scripts.
So let's suppose you have an array of data you've captured with web_reg_save_param(x, y, z, "Ord=All", LAST); here's how to feed that data into an audit log.
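
For context, the kind of Ord=All registration being assumed looks something like this; the boundaries are illustrative, while the parameter name matches the code below:

web_reg_save_param("available_cards_psn",
    "LB=<td class=\"psn\">",    // assumed left boundary
    "RB=</td>",                 // assumed right boundary
    "Ord=All",                  // saves every match, plus available_cards_psn_count
    LAST);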

vuser_init()
{
    // write the file header once, at vuser start
    WriteToOutputFile(lr_eval_string("card,psn,status"));

    return 0;
}
The function, as defined in the previous post:
int WriteToOutputFile(char *string)
{
    long file_streamer;    // LR scripts typically declare file handles as long
    char *filename = "c:\\gemalto_audit.txt";

    // open the file in append mode
    if ((file_streamer = fopen(filename, "a+")) == NULL)
    {
        lr_error_message("Cannot open %s", filename);
        return -1;
    }

    fprintf(file_streamer, "%s\n", string);
    fclose(file_streamer);
    return 0;
}

And finally, the function in use…

Action()
{
    char szParamName1[128];
    char szParamName2[128];
    char szParamName3[128];
    char strToOutput[512];    // buffer for the assembled CSV line
    int nCount, i, j;
    ...

    // get number of matches from Ord=All
    nCount = atoi(lr_eval_string("{available_cards_psn_count}"));

    // "available_cards_count" = 22 - boundaries are insufficiently unique
    // "available_cards_psn_count" = 11
    // "available_cards_status_count" = 22 - boundaries are insufficiently unique

    for (i = 1; i <= nCount; i++)
    {
        j = i * 2;    // skip every other element in the two over-matched arrays
        sprintf(szParamName1, "{available_cards_%d}", j);
        sprintf(szParamName2, "{available_cards_psn_%d}", i);
        sprintf(szParamName3, "{available_cards_status_%d}", j);

        strcpy(strToOutput, lr_eval_string(szParamName1));
        strcat(strToOutput, ",");
        strcat(strToOutput, lr_eval_string(szParamName2));
        strcat(strToOutput, ",");
        strcat(strToOutput, lr_eval_string(szParamName3));

        WriteToOutputFile(strToOutput);
    }
}

I find that, more often than not, capturing the data is easy enough; it's getting at that data in a structured way, so you can use it effectively later, that can be painful. The above is a real-life example: developers implementing the content management inconsistently meant there was nothing uniquely identifying two of the fields I needed. Whether I tightened the left boundary or the right boundary, elements were missed.
I'm not criticizing developers per se; they can't really be expected to think about a performance tester looking at the source-code structure a year down the project's life-cycle.
The workable solution was to capture the 11 values I needed for one element and the 22 value-pairs for the other elements, then just skip every other element in two of the arrays. Inelegant perhaps, but it works, and I only built it today, so it may become beautified over time.

Extending Loadrunner Scripts with C – Function Library

So, I'm working at a new client, back doing the Loadrunner thing. One of the nice things about that is that I get to re-use and refine code I've written previously for other clients. This article contains some of those code snippets, ones I've used time and time again.

I've re-visited this code recently and found that a) it wasn't very good, and b) I can do better now. Presented below is the better version; there may be further updates covering formatting and the like.
And there’s no guarantee this is perfect.

Output to Text file

int WriteToOutputFile(char *string)
{
    long file_streamer;    // LR scripts typically declare file handles as long
    char *filename = "c:\\myfilename.txt";

    // open the file in append mode
    if ((file_streamer = fopen(filename, "a+")) == NULL)
    {
        lr_error_message("Cannot open %s", filename);
        return -1;
    }

    fprintf(file_streamer, "%s\n", string);
    fclose(file_streamer);
    return 0;
}

Called like this:

WriteToOutputFile(lr_eval_string("bban_count: {bbanNumber_count} blah {bbanNumber_count}"));
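
For what it's worth, if {bbanNumber_count} had evaluated to, say, 12, the line appended to the file would read bban_count: 12 blah 12, since lr_eval_string substitutes each {parameter} reference before the string reaches the function.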

I just added the function beneath vuser_init rather than creating a header file. For multiple vusers it's a good idea to parameterise the filename, as they can't all share one file. I recommend a vuser identification parameter, since that's built in, or timestamps for uniqueness.
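
A quick sketch of the vuser-ID approach, using the standard lr_whoami call; the path and naming scheme are illustrative:

int vuser_id, scid;
char *group;
char filename[64];

// lr_whoami fills in the vuser id, group name and scenario id
lr_whoami(&vuser_id, &group, &scid);

// e.g. c:\audit_mygroup_7.txt - unique per vuser (path illustrative)
sprintf(filename, "c:\\audit_%s_%d.txt", group, vuser_id);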

I also have a DOS script for joining them all back up again, since I tend to use this function for creating custom audit logs to track test-data states as they move through a scenario / test cycle.

Basic Approaches to Performance Testing

In which I shall attempt to state that there are 3 and only 3 approaches to performance testing…

1. The comparative method.

We'll assume you have an application called AUT v1.0. We'll further assume you have a scenario built to test AUT v1.0, and to hit it sufficiently hard that its response times are less than perfect. Ideally it should be walking: not limping, but not sprinting along either.

We'll then suggest that v1.1 is coming out soon and that much of the functionality is unchanged. There is always new functionality (that's the whole point), but it is the level of newness that dictates whether this approach can work.

Run your scenario against v1.0 as often as is necessary for consistent timings to be established. I maintain that 3 runs is the absolute minimum, and that more (often much more) is better. Gather your results so that direct comparison of transaction times across runs is possible:

Run            1    2    3    4    5
Transaction A  1    1    1.1  2    1
Transaction B  2    2    2.2  4    2
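
Comparison like this presumes each step is wrapped in a named transaction so that every run reports the same timings. A minimal sketch, with an illustrative URL:

lr_start_transaction("Transaction_A");

web_url("home",
    "URL=http://example.com/",
    LAST);

lr_end_transaction("Transaction_A", LR_AUTO);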


Emulating Reality taken too far

So I'm working at a client who use, alongside the usual mix of workstations, a hand-held device running Windows CE. They have 800 of these across the business (though most seem to be sitting with the engineers being repaired; they're not particularly robust, and don't react well to percussive maintenance).
To provide a little background, I'm here because the incumbent test/dev team have been advised their services are no longer required. The current performance team have identified a number of issues, many of which are still outstanding, and the focus has now switched to the hand-held terminals (HHTs).
Long before I arrived, it was decided that performance testing was required for these devices (I agree) and that the best way would be a test script and a stopwatch. To say I disagree would be understating it on a massive level.

I'm all for emulating the real business processes; it's just that I know if you "measure" performance with a stopwatch, you can cease calling it performance testing and start calling it "meaningless arbitrary timekeeping signifying nothing". (Told you I disagreed.)
Here at AutomationSolutions I like to think we look for the better way, and I certainly have no interest in wasting my afternoon, or my client's money, sitting about tapping buttons on an oversized calculator. I'm going to do some research into emulating hand-held operating systems, and into capturing wireless packets and simulating that process. I'm going to see if I can identify any way to bring a scientific method to this testing, and if I cannot, I'm going to request that we move it immediately out of scope, or at least admit that we're more interested in ticking boxes than in accurately measuring performance.

(While I’m thinking about it, I might add a rant tag and some pretty CSS to the site. I suspect this will not be the last word on this topic).

First Steps with Selenium

Matt:
So, Selenium is an open-source toolset for testing web-based applications. It can be found here.
I started working with Selenium recently, and I thought it'd make an interesting post to document how I went from novice to competent user.

One of the interesting features of Selenium is that it can be used for performance testing. In my experience performance testing is a costly business, so an open-source tool that can do it is a rare thing.

Let's begin by looking at what Selenium is made up of:

  • Selenium IDE – the scripting environment.
  • Selenium RC – allows further scripting, and execution with variable data.
  • Selenium Grid – allows multiple tests (or instances of the same test) to be executed in parallel.

It seems to me that the IDE and RC are doing the job of Loadrunner's VuGen, while Grid handles the tasks of the Controller. More on this in a future post.

At the time of writing, Selenium IDE is available as an add-on for the Firefox browser only, though the tests can be replayed (via RC) on Firefox, IE and Chrome.

Installation

Installation is made easy by the availability of three zipped packages (core, RC and grid) and the inclusion of the IDE as a Firefox extension (.xpi), which will install automatically into Firefox. The folks over at Selenium HQ also recommend getting an element inspector for your browser. As all the Selenium scripting I've done is in Firefox, Firebug is highly recommended, and is freely available as an add-on or from http://getfirebug.com/.

Start by installing the IDE into Firefox, and unpack the zip files somewhere sensible.

Hello Selenium!

Under the Tools menu, there should now be an entry for Selenium IDE that leads us to this:

[Screenshot: the Selenium IDE window]

More to come, this article is going to be much bigger than I first thought…