The downside to being an automated test contractor is that when you’re actively engaged on a project, there’s little time for doing anything else. It becomes a commercially-motivated cycle of work, eat, sleep, repeat.
Once you’re outside of that cycle – that is, not actively working – it’s hard to motivate yourself into thinking about development and code, and hard to come up with things worth documenting online. I remember when I first became a contractor: every time a contract ended it was incredibly dispiriting, and if you’re outside a contract long enough it can get quite depressing*. Maybe it’s just me, but I doubt it.

Where there’s no time and no opportunity while on a contract, the opposite doesn’t seem to hold true either: given all the time in the world, my brain goes into hibernation and I don’t turn the performance-tester part back on until my next interview or job.

And it’s entirely for this reason that I haven’t updated either site in over 12 months. That’s actually a good thing, since during that time I’ve mostly been busy working at a couple of different gigs. My point is that it’s hard for anyone to tell the difference, since the preparation of content doesn’t happen whether I’m working or not.

As in most things it seems to be cyclical, and there seem to be some common factors which equate to me sitting down to post.

  • I need to be working but not busy.
  • I need to be engaged on something new, interesting or difficult.
  • I need to be able to access the site during working hours – and most employers lock down FTP ports and disallow software for uploading images etc.

That’s pretty much the full list. I find it difficult to sit in front of a computer all day, then go home / back to the hotel and sit in front of a computer some more. Even my World of Warcraft account doesn’t get attended to…

    So I could once again promise to write an article on JMeter, LoadRunner, Performance Center etc, but you’d be advised to ignore that promise, or take it with a very large pinch of salt. I’ll come up with something eventually…

    In any case, I now have a new contract, at IKEA, looking at performance testing with LoadRunner, ALM and Performance Center, and using Splunk to analyse the results – which is new to me.

    It’s beginning to shape up into that magical configuration of things I’d write about.

    * Strangely that effect has diminished over time, and self-belief kicked in instead. Now when I’m out of work, I find I’m motivated by going to the gym, relaxing, catching up with things I don’t normally have time for. Effectively you KNOW you’ll be working again soon so you make the most of it.

    We continue…

    As I began to discuss in the last article, I have a working version of a Java-based JMeter runner from a colleague, running in Eclipse, so happy days. I have no idea yet how it works, but I’ll get to it eventually.

    I also created the Excel version of the same, for the largest set of tests (21 groups within a single test). That is operating more or less successfully at the moment – I had some fun with deleting the temporary batch file it creates, since deleting it too early stopped the test, so I added a simple time-counter to make it wait. That’s not ideal, as it hogs processor time for no real reason, so I’ll be revisiting the approach eventually. As a proof of concept, though, I’m happy with it.
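For what it’s worth, the busy-wait could be avoided entirely by blocking on the launched process instead of counting time. A minimal sketch of that idea in Python – the helper name and the use of a temp file are my own, not the Excel implementation:

```python
import os
import subprocess
import tempfile

def run_batch(commands):
    """Write commands to a temporary batch/shell file, run it, and wait
    for it to finish before deleting it -- no busy-wait loop needed."""
    fd, path = tempfile.mkstemp(suffix=".bat")
    try:
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(commands))
        # subprocess.run blocks until the process exits, so the cleanup
        # in the finally block can never fire while the test is running.
        launcher = ["cmd", "/c", path] if os.name == "nt" else ["sh", path]
        result = subprocess.run(launcher, capture_output=True, text=True)
        return result.returncode
    finally:
        os.remove(path)
```

The key point is that the wait is delegated to the operating system rather than a timer loop, so no processor time is wasted.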


    11 Months Later and counting…

    OK, I’ll hold my hands up, I’ve been a bit remiss with the updates and the projects and the rah-rah-rah. In my defence, I’ve been busy working for a living and learning some interesting new things, not least Apache JMeter.

    I knew JMeter a little bit from previous projects, but my latest assignment is 100% JMeter for performance testing so that’s been a steep learning curve (although being paid to learn doesn’t hurt either).

    In any case, I’ve got scripting covered and modularised test fragments have been produced. That was surprisingly easy, partly because it was handled by people besides myself – I’m looking to leverage their automated tests to form a large part of the performance test pack – and partly because much of the testing is submitting XML messages to specified endpoints. Once you have the basic mechanism covered, it’s essentially a copy-and-paste job with some small adjustments (usually to the XML payload, which may or may not be parameterised).

    You may be asking, “So what’s with today’s post then, Matt?”, and I shall tell you now.

    A JMeter script / scenario – a “.jmx” file – can take variables from the command line, and that is the preferred approach for most people, since they’re using small sets of scripts (typically one), re-using variables, or running from the GUI. The performance test pack, on the other hand, consists of multiple scripts within one jmx, with all the usual runtime settings needing to be provided one way or another. At the moment the pack contains ~20 different scripts, with their attendant calls to datafiles etc.

    I could hard-code the vUsers, the pacing, the duration, the ramp-up settings, the thinktime, the paths to input and output data, the reports directory etc.
    I could prepare the command-line request to call all 20-odd scripts with those settings, but I can practically guarantee an errant space or a misplaced -J, and then a test would be ruined, data burned, environments would need resetting, civilisations would fall, economies fail and life would probably cease on this planet. Or an hour of time may be wasted. Definitely one of those.

    So I want to create an interface to the batch file – and this is where I’m writing that desire down and figuring out how much effort it’s going to be.

    Within each JMeter call, I can set up the variable placeholders, starting with the thread group – number of threads = number of vUsers. If I set each group to 0 by default, then within the call I can activate only the groups (scripts) necessary.
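As a sketch of what that looks like from the calling side: in the jmx, each thread group reads its user count from a property with a default of 0 – e.g. `${__P(threads1,0)}` – and the caller only overrides the groups it wants live. A hypothetical Python helper to build those flags (the names and defaults are mine, not the real pack’s):

```python
def build_jmeter_flags(active_groups, total_groups=21, threads=5, ramp=0):
    """Build the -J property flags for a multi-group .jmx where every
    thread group defaults to 0 users; only the groups listed in
    active_groups are switched on. Group numbering is 1-based."""
    flags = []
    for g in range(1, total_groups + 1):
        flags.append(f"-Jthreads{g}={threads if g in active_groups else 0}")
        flags.append(f"-Jramp{g}={ramp}")
    return flags
```

Any group not named in `active_groups` gets 0 threads and so contributes nothing to the run.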

    I should add that along with the 20 scripts for the major system of record (which is actually 2 applications sharing a database), there are additional stubs, interfaces and applications which need to be performance tested, potentially simultaneously (actually, definitely simultaneously, I was just underplaying the complexity 🙂 ).

    Demo.JMX and Demo1.bat

    So, to begin with, I knocked up the tests, fragments etc. That process will form another article at some point in the future, all about JMeter, but it’s going to cover the basics and I don’t have the time to revisit that at the moment.

    The jmx is documented in the images below. It has only 2 fragments as a proof of concept: login and logout.

    Alongside the jmx file is a batch file which is used to launch the jmx with its runtime settings declared.

    @echo off
    cls
    setlocal
    REM cd /d %~dp0
    pushd %~dp0

    Call ..\..\apache-jmeter-2.11\bin\jmeter -n ^
    -t demo1.jmx ^
    -Jlogin=su ^
    -Jpassword=gw ^
    -Jthreads=5 ^
    -Jramp=0 ^
    -Jduration=120 ^
    -Jserver=qa2.bolt.admiral.uk ^
    -Jport=8081 ^
    -Jprotocol=http


    I found that you can declare either iterations or duration, but not both; and it doesn’t appear to be possible to declare both, populate only one, and have the other ignored. It just didn’t work for me at all. The investigation continues, but for now I’m going to use duration, setting it to a low value when I need single(ish) iterations.

    This works adequately / perfectly well, depending on whether duration is an acceptable or a perfect solution for you. And I should add there’s no reason why I couldn’t create a second demo jmx with the iterations populated – I could even have the batch file decide conditionally which jmx to execute.

    (Maybe later because…)

    The issue isn’t with that, it works fine for me, but when I look at the full test, I find I have this:

    @echo off
    cls
    setlocal
    REM cd /d %~dp0
    pushd %~dp0

    Call ..\..\apache-jmeter-2.11\bin\jmeter -n ^
    -t ModularTest.jmx ^
    -Jlogin=su ^
    -Jpassword=gw ^

    -Jthreads1=5 ^
    -Jthreads2=5 ^
    -Jthreads3=5 ^
    -Jthreads4=5 ^
    -Jthreads5=5 ^
    -Jthreads6=5 ^
    -Jthreads7=5 ^
    -Jthreads8=5 ^
    -Jthreads9=5 ^
    -Jthreads10=5 ^
    -Jthreads11=5 ^
    -Jthreads12=5 ^
    -Jthreads13=5 ^
    -Jthreads14=5 ^
    -Jthreads15=5 ^
    -Jthreads16=5 ^
    -Jthreads17=5 ^
    -Jthreads18=5 ^
    -Jthreads19=5 ^
    -Jthreads20=5 ^
    -Jthreads21=5 ^

    -Jramp1=0 ^
    -Jramp2=0 ^
    -Jramp3=0 ^
    -Jramp4=0 ^
    -Jramp5=0 ^
    -Jramp6=0 ^
    -Jramp7=0 ^
    -Jramp8=0 ^
    -Jramp9=0 ^
    -Jramp10=0 ^
    -Jramp11=0 ^
    -Jramp12=0 ^
    -Jramp13=0 ^
    -Jramp14=0 ^
    -Jramp15=0 ^
    -Jramp16=0 ^
    -Jramp17=0 ^
    -Jramp18=0 ^
    -Jramp19=0 ^
    -Jramp20=0 ^
    -Jramp21=0 ^

    -Jduration=120 ^

    -Jserver=qa2.bolt.admiral.uk ^
    -Jport=8081 ^
    -Jprotocol=http


    And that’s just for the one system, there are currently 8 others, so I have to figure out a way around this now, and I’m thinking:

    a) Simple – build a dashboard in Excel which allows the batch file to be constructed and executed on the fly. For all systems I could have a tab, a dashboard and a single execute button. And I know this could work (because I’ve built part of it already, but…)
    b) Complex – build a GUI-based front-end dashboard which will construct and execute the batch files.

    For each system there will need to be a separate bat file to call a separate jmx file (the directory structure is identical, but each has its own root directory as per the picture above, otherwise I could centralise the whole thing). Calling multiple bat files from a bat file is the easier workaround, incidentally, but I like GUIs and it’ll look altogether more professional.
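Whichever front-end wins, the core step is the same: take a table of runtime settings and emit the batch file mechanically, so no flag is ever hand-typed. A rough Python sketch of that step – the function and file names are hypothetical, and the real version would sit behind the dashboard:

```python
def build_batch(jmeter_path, jmx, props, out="run_test.bat"):
    """Construct the launch .bat from a dict of runtime properties.
    Each property becomes a -Jname=value flag, joined with the batch
    continuation character so the file stays readable."""
    lines = ["@echo off", "setlocal", "pushd %~dp0"]
    call_parts = [f"Call {jmeter_path} -n -t {jmx}"]
    call_parts += [f"-J{k}={v}" for k, v in props.items()]
    # one flag per line, continued with ^, exactly like the hand-written file
    lines.append(" ^\n".join(call_parts))
    with open(out, "w") as f:
        f.write("\n".join(lines) + "\n")
    return out
```

Generating the 60-odd `-Jthreads`/`-Jramp` lines this way removes the errant-space risk entirely, since the dashboard only ever edits the table, never the text.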

    A colleague of mine has created a simple interface in Eclipse to do this for a single test so I’m going to leverage that and extend.

    Lizard Brain Thought Process

    I’ve been running a test against a web-served, memory-cached database. It’s supposed to provide personalised tracking information on the bank’s website.
    Bizarrely, given our usual results, it’s amazingly fast: 0.007s per transaction with 50 users, and a “breakpoint” of around 1,800 transactions per second. 500 per second was the maximum NFR – it was 470 really, but I like round numbers.

    The reason I put breakpoint in quote marks, by the way, is quite simple – nothing’s broken, no NFR has been breached, the server is up and responsive. But the throughput has bottlenecked and adding more load isn’t having any effect. Clearly there’s a resource issue somewhere, but no one’s looking for it. We’re at ~4 times our maximum load and still well below the transaction time requirements.

    Now I’m fine with that, but it’s left me at a bit of a loose end – this project was meant to run until the end of the week, and it’s barely lunchtime on Thursday as I’m typing this. So I was thinking of something else I could do until the end of the week, before I scream for more work (hey, it’s Thursday, nothing too taxing to lead up to the weekend).

    And so, I was wondering:

    There are many different ways to take the data from a web-page form and insert it into a database. Off the top of my head – Perl, PHP, VBScript and JavaScript… that took me about 3 seconds to think of and 20 to type… there will be many, many more. But which is fastest?

    It occurs to me that in order to tell the difference, the payload of the form would need to be extraordinarily high – think hundreds of elements rather than the usual 5–10. Also, I’ll need to hit the server quite hard to make it struggle a bit. I’d expect that at low load levels there will be virtually no difference, but at higher load there might be a clear victor…

    I’m betting on PHP, as it most cleanly translates directly into the database insertion. If I can find a C-script method I’d go for that, but I’m not sure such a thing exists yet.

    Anyway, I’m gonna see if I can find out. Place your bets.

    Also, it might be interesting to see which is quickest to retrieve that data once inserted. At the very least, I’ll refresh my scripting skills in a few different languages.

    Web-Based Testing Database #2

    So I’ve done the obvious analysis of the performance testing process.

    1. Pre-Test Criteria
    2. Test Execution
    3. Post-Test Results

    Clearly that lends itself to at least one form for the run details and a second form for the post-run details.
    I believe a single table might be the easiest approach – in any case, a single primary key of run ID will be used, auto-assigned within my database. I will also have a test timestamp and a test name, but the run ID will be the only key.

    The Pre-Test information shall consist of, but not be limited to:

    • A test run ID
    • A test timestamp
    • A test name
    • Environment
    • Scripts, run-time settings, vUsers
    • Test purpose

    EDIT 23/07/2013:
    I’ve revisited this list today. I’m not sure how best to track the scripts included and the run-time settings – it’s a pain to enter that stuff into a form by hand, and ideally I’d steal it from the scenario details. Failing that, I just won’t include that information directly.
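To make the single-table idea concrete, here’s a minimal sketch using SQLite – the column names are my own guesses at the fields listed above, and `run_id` is auto-assigned as described (the real database will be behind the PHP pages, so this is illustration only):

```python
import sqlite3

# Single-table design: run_id is the auto-assigned primary key,
# timestamp defaults to now, everything else is descriptive.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_run (
        run_id       INTEGER PRIMARY KEY AUTOINCREMENT,
        run_time     TEXT DEFAULT CURRENT_TIMESTAMP,
        test_name    TEXT NOT NULL,
        environment  TEXT,
        purpose      TEXT,
        observations TEXT   -- filled in post-test
    )
""")
conn.execute(
    "INSERT INTO test_run (test_name, environment, purpose) VALUES (?, ?, ?)",
    ("soak_01", "QA2", "overnight soak"),
)
run_id = conn.execute(
    "SELECT run_id FROM test_run WHERE test_name = 'soak_01'"
).fetchone()[0]
```

The pre-test form populates the descriptive columns; the post-test form updates `observations` on the same row, keyed by `run_id`.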

    I’ve started work on a variety of pages and the database itself. I have a form that accepts some of the inputs mentioned above, I have a working script for uploading and unpacking a zip file, and I have a page that will pull the results in a structured (tabular) format. That looks AWFUL in this particular WordPress template, though.

    I’ve basically been pressing on with learning PHP and how best to structure this into a single validated page. I’ll post all of the source code on here in the next few posts. Once I’ve sanitised them.

    Web-Based Testing Database

    There are a number of defect-management tools and a number of test-management tools in the world, and some applications combine the two. Quality Center (QC) – aka Application Lifecycle Management (ALM) since the HP purchase of Mercury Interactive – is the one I am most familiar with, to the point where I suspect it’s bundled with QTP and LoadRunner by the salesman.

    The problem is that LoadRunner doesn’t integrate well with QC; performance defects tend to be fewer but broader in scope, and there seems to be a fundamental issue of behaviour when it comes to analysing and developing fixes for performance – one that spans the defect-management teams, the performance testing team and the development team.

    I can’t explain why that would be, but in 90% of the offices I’ve worked in, performance testing defects are tracked outside the defect-management process, outside the development process, and have very little visibility with management.

    The 10% who raise defects in a defect-management system are probably doing it right, while everyone else is being a little short-sighted, lax or potentially negligent.

    There is also a peculiar belief in the separation of performance testing and performance tuning into separate streams, as if the teams differ and one doesn’t lead to the other in a seemingly endless procession of execution and analysis, development and release. (I accept that behaviour can be different in this phase, I’m just not always sure it should be. Why compromise the release process just because you’re in a test environment? It seems like the one place, especially a pre-production environment, where that would be incredibly ill-advised.)

    I suspect, though I’m biased, that it comes from the performance testing team originally. I hear a lot of “I don’t decide if performance is defective or not, I just measure it and report. Someone else can decide whether it’s acceptable.”
    I’ve said it myself. It’s not right, though; it’s just an excuse for doing a little less work. I’ve automated checking NFRs against actuals in the past – even to the point of automating the entire reporting process, up to sending the report out to management. A manual check there is ESSENTIAL in my opinion. (One of my current colleagues disagrees, but he’s not reporting upwards, more horizontally to developers, who read performance from analysis of server perfmon stats, not transaction response times.)
    Which just proves you always have to consider the audience, I suppose.

    Anyway, whilst it’s true that performance testers are rarely decision-makers, it is absolutely our duty, in my opinion, to provide the best information possible to those decision-makers. And though I absolutely believe that separating the performance test results from the NFRs until the last minute is appropriate and advisable, it’s a relatively easy task to check an NFR against an actual result from LoadRunner, and where there is a large-scale negative deviation – say 50% slower than expected – there is no reason whatsoever why a defect shouldn’t be raised.
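That check really is trivial to automate. A sketch of the idea in Python – the transaction names are illustrative, and the 50%-slower threshold is the one mentioned above:

```python
def check_nfrs(nfrs, actuals, tolerance=1.5):
    """Compare actual response times (seconds) against NFR targets and
    flag transactions that deserve a defect: here, anything more than
    50% slower than its target (tolerance=1.5)."""
    defects = []
    for txn, target in nfrs.items():
        actual = actuals.get(txn)
        if actual is not None and actual > target * tolerance:
            defects.append((txn, target, actual))
    return defects
```

Run against each test’s results, anything this returns is a candidate defect, with the target and actual already attached as the qualifying statement.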

    Matt: The most common reason, in my experience, is that performance testers don’t like raising defects, and developers don’t like looking at performance defects – “Transaction A is slow” isn’t precise enough, presumably. Even with qualifying statements – “Transaction A – user login – is outside its 5-second NFR, with actuals under load of 9 seconds.”

    Add in that we commonly work in our own dedicated environment with its own baseline, and that developers typically can’t replicate “load” – no LoadRunner for them – and I can see that it’s painful. But really it’s just an exercise in co-ordination and communication.

    Now, though, there is Performance Center, which appears to be an online version of the LoadRunner Controller with QC’s defect-management area bundled in. The problem with this seems to be that no one is buying it.

    Which brings me finally to the point of this post, I’m going to work on an online test execution database.

    The aim is to provide somewhere for a performance testing team to define the run they’re about to execute, execute it, upload the results and add observations. Ultimately I want to add a comparison engine and somewhere to check against defined NFRs, and it may also be possible to add the facility to actually run the test – which will necessitate access to the run-time settings for a script and/or the scenario.

    But if I can make it do all of that, I can also add a scheduler function to it.

    There are a number of factors to all of this, not least security. This is not going to be a quick win, and I’ll need to break the project into sub-components just to keep track of it. It should keep me busy for a while, though, so that’s nice – and it’s an area I’ve not looked at before.

    Expect more posts soon on various related topics as I define the scope, the parts and how I’m going to glue it all together.

    Ok, so what’s next?

    Since the majority of the work on the Excel version of the batch scheduler is now complete, I find myself wondering what to work on next. I could absolutely press on with a stand-alone executable version – but I like to mix things up a bit, keep them from getting stale.
    The obvious downside to that is that by the time I revisit the Excel code I’ll have forgotten how it works. But then, that is precisely why this site exists – to document the content of my brain.

    I’m a little busy right now sorting out some test data for a mobile application, but I feel like working on something web-based. I’ll make my mind up in a day or so, but I have a feeling it’ll be the results database and comparison engine. Which clearly needs a better name – although, per my usual naming convention, it is what it says it is.

    As I see it, there are a number of elements to any test results database, even before you add comparison capabilities. You need a pre-test form into which go all the usual run-time settings, purpose of test, time and date, environment in use, etc.
    You need a test results form into which the results can be uploaded, and that may need to house all of the analysis files from LR, plus the HTML report. To keep the size down, I’d like to upload both as compressed files. No idea how to do that yet.
    It should also contain any tester observations from the execution.
    So we have at least 2 tables so far.
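For what it’s worth, the compression step at least looks straightforward in most languages. A sketch of the idea in Python, using the standard zipfile module (the names are illustrative and this isn’t tied to the eventual upload form):

```python
import io
import zipfile

def pack_results(files):
    """Compress result files into an in-memory zip ready for upload.
    `files` maps archive member names to their byte content."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()
```

Building the archive in memory means nothing touches disk before the upload, and the same bytes can be stored directly in a BLOB column if that proves easier than a file store.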

    I’m going to spend a bit more time thinking on this…

    Loadrunner Batch Scheduler – Excel Version #3

    I’ll be honest, I’m surprised at how quickly this has come together. I didn’t even know if I could make it work when I began, and I’ve managed to fit it in with doing full time performance testing and living life as an international contractor.

    It’s not “finished” but then nothing ever really is in AutomationSolutions-world. There is scope to extend beyond the 15 or 16 runs it currently caters for. I can make that ticker a bit less intrusive too I expect. But the fact is that it works… and it works rather well.

    It’s not the prettiest spreadsheet in the world, and there are other features I’d like to add to allow for user configuration. But they’re my changes. You’re going to have to ask nicely for them. Or you can use v1.0, the basic scheduler. Which is provided here:


    It’s taken a little over a week to build that and roll it out in Belgium, we’re using it this evening for the next cycle of tests. If there’s no update on this following that, you’d be safe to assume that it works as reliably as I think it does.

    I hope its usage is fairly self-explanatory, since I haven’t prepared any sort of documentation yet, and I’m not likely to unless someone screams.

    Loadrunner Batch Scheduler – Excel Version #2

    From the previous post, it should be possible to see that the NextTick function, which counts the time, is our trigger for execution.
    Specifically, this loop-and-if combination:

    For i = 0 To UBound(ArrTimes)
        If [A1].Value = ArrTimes(i) Then
            Application.StatusBar = "Running test id: " & Range("A" & i + 6).Value
        End If
    Next i

    Since I first wrote that last week, I’ve added a field to state that a run has been “done”. And I’ve built in a little tolerance by checking if the current time is greater than the execution start time – stored as ArrTimes(n).

    I’ve also borrowed some code from VBA Express to terminate a running process. I’m not actually terminating the process, though – I’m just checking whether WLRun.exe, aka the Controller, is running.

    So the latest code looks like this:

    For i = 0 To UBound(ArrTimes)
        If [A1].Value >= ArrTimes(i) And Range("F" & i + 6) <> "DONE" Then

            'Check if LR is already running
            strTerminateThis = "wlrun.exe" 'Process to check for

            Set objWMIcimv2 = GetObject("winmgmts:" _
                & "{impersonationLevel=impersonate}!\\.\root\cimv2") 'Connect to CIMV2 Namespace

            Set objList = objWMIcimv2.ExecQuery _
                ("select * from win32_process where name='" & strTerminateThis & "'") 'Find the process

            If objList.Count = 0 Then 'If 0 then the process isn't running
                Application.StatusBar = "Running test id: " & Range("A" & i + 6).Value
                Range("F" & i + 6) = "DONE"
                Call StartTest(Range("B" & i + 6).Value, Range("C" & i + 6).Value)
            Else
                Application.StatusBar = "Loadrunner Controller is still running"
            End If

        End If
    Next i

    I’ve just replaced the existing section with that, but I’ll post the full deliverable at some point (when it’s finished 🙂 )

    I’ve also defined the module to call the controller.

    Sub StartTest(Scenario_Path, Results_Path)

    'Wlrun.exe -Run -TestPath scenario.lrs -ResultName res_folder
    strCommand1 = Range("D1").Value 'Path to WLRun.exe
    strCommand2 = Scenario_Path
    strCommand3 = Results_Path

    strCommand = strCommand1 & " -Run -TestPath " & strCommand2 & " -ResultName " & strCommand3
    MsgBox (strCommand)
    'Shell (strCommand)

    End Sub

    At the moment it just displays the command string in a message box, for testing purposes.

    So now, we have:

    A time-based system within Excel which will run a defined scenario at the appropriate time. I haven’t figured out yet what I’d like to do if WLRun is still running, but it’s coming together and I think that’s the last thing. I think I’ll add a minute on to all future runs to delay the system.

    I also added an autoschedule function, mostly for testing purposes. I’ll document that in a future post, it’s not really important.

    Loadrunner Batch Scheduler – Excel Version

    I’ve made a start on the Loadrunner Batch Scheduler – There is still a long long way to go, but the basis of the underlying code and the basic design is done, I guess.

    The Dashboard


    The Code

    The code below implements a timer – showing the current time in cell A1.

    Pushing the start button fires StartBtn_Click, which reads our run variables into an array and starts the timer.
    StopBtn_Click stops the timer.
    Then there is NextTick, which checks our array of times against the current time. If it takes longer than a second to parse the array, I suppose it’s possible the code won’t fire, but I’m only allowing for 14 runs at the moment.

    I’ll look to build 10 seconds of tolerance into it at some point, but then it will need to know that it’s executed on the 1st second so as not to try to fire on the 9 subsequent ones. (A “done” flag essentially).

    Dim ArrTimes(14)

    Dim StopTimer As Boolean
    Dim SchdTime As Date
    Dim Etime As Date
    Const OneSec As Date = 1 / 86400#

    Sub StopBtn_Click()
        StopTimer = True
        Etime = Now
        [A1].Value = Time()
    End Sub

    Sub StartBtn_Click()

        StopTimer = False
        SchdTime = Now()
        [A1].Value = Format(Now, "hh:mm:ss")
        Application.OnTime SchdTime + OneSec, "Sheet1.NextTick"

        For i = 0 To 13
            RangeObject = "D" & i + 6
            If Range(RangeObject).Value <> "" Then
                ArrTimes(i) = Range(RangeObject).Value
            End If
        Next i

    End Sub

    Sub NextTick()
        If Not StopTimer Then 'Only reschedule the update while the timer is live
            Etime = Now
            [A1].Value = Format(Etime, "hh:mm:ss")
            SchdTime = SchdTime + OneSec
            Application.OnTime SchdTime, "Sheet1.NextTick"

            For i = 0 To UBound(ArrTimes)
                If [A1].Value = ArrTimes(i) Then
                    Application.StatusBar = "Running test id: " & Range("A" & i + 6).Value
                End If
            Next i
        End If

    End Sub

    I should add, by way of a disclaimer, that I have absolutely no idea if this is going to work at all, let alone reliably, at this point. But it’s a promising start in my opinion. Oh, and I do indent my code in the IDE, but WordPress, or this theme, wipes that out. My code is pretty, honestly.