
C++ Unit Testing

Unit testing in managed languages such as Java and C# is easy. The general theory is that you create a Test annotation and apply it to your test functions. Next, you create a series of assert functions that throw an AssertFailed exception if an assertion doesn’t match what you expect. Finally, you create a test runner that uses reflection to scan your assemblies or JARs for functions marked with your Test annotation and invokes them. The test runner just needs to catch any exceptions thrown by the test functions and report them somewhere. It doesn’t have to care too much whether the exception thrown is a failed assert or something more serious such as a null pointer exception; it can handle both in pretty much the same way. Tools such as NUnit or TestNG provide all of this for you, so you will very rarely need to write any of it yourself.

With C++, things aren’t quite so easy. There isn’t really any form of annotation or reflection, so discovering tests is harder. You do have exceptions, but you might not be able to use them in the environment you’re targeting, and you don’t get call stacks with them either. And anyway, you could get a fatal crash deep in the code you’re testing before you ever get the chance to throw an exception from one of your assertions.

This doesn’t mean that you can’t get C++ unit testing frameworks with a similar level of functionality to the ones for managed languages: Google Test is a pretty good one, for example, and CppUnitLite2 is another very portable framework. I want to take a look at how a C++ unit testing framework could be implemented as I find it an interesting problem.

Goals

  • Easy to implement test functions that can be discovered by the test runner.
  • Assert functions that will tell me what test has failed along with a call stack.
  • Fatal application crashes won’t kill the test runner but are reported as failed tests along with a call stack.
  • Possible to plug into a continuous integration build server so that it can evaluate whether a build is stable.

For my example framework, I’ll only be targeting Unix-type platforms (Linux, Mac OS) as the methods I’ll be using are cleaner to implement, making it easier to explain the theory. This also allows me to provide a sample that will work on Ideone so you can have a play with the framework and see it running without needing to download any code.

The framework I present here takes its inspiration from Google Test so I highly recommend taking a look at that.

The Sample Framework

You can try out my sample framework on Ideone. Due to Ideone being primarily a code fiddle site to try out ideas, all your code must live in a single source file so don’t judge the structure too harshly! Normally you would separate everything out a bit and have clear interfaces between the test runner and your tests.

Test Function Registration

This is achieved by defining a macro to generate the test function declaration. The macro also creates a static object that contains the test function details and registers itself with the test runner in its constructor. The test function details contain the name of the function, the source file, the line number and a pointer to the function to execute. They can then be stored in a simple linked list for the test runner to iterate over when it comes to run the tests. By using static objects, we can ensure that all our tests are registered automatically before main() is executed, saving us from having to explicitly call a set-up function that contains a list of all our test functions and needs to be maintained as new tests are added.

Test Reference Class

[code language="cpp" firstline="36"]
//-----------------------------------------------------------------------------------//
// Class for storing reference details for a test function
// Test references are stored in a simple linked list
//-----------------------------------------------------------------------------------//
// Type def for test function pointer
typedef void (*pfnTestFunc)(void);

// Class to store test reference data
class TestRef
{
public:
    TestRef(const char * testName, const char * filename, int lineNumber, pfnTestFunc func)
    {
        function = func;
        name = testName;
        module = filename;
        line = lineNumber;
        next = NULL;

        // Register this test function to be run by the main process
        registerTest(this);
    }

    pfnTestFunc function;   // Pointer to test function
    const char * name;      // Test name
    const char * module;    // Module name
    int line;               // Module line number
    TestRef * next;         // Pointer to next test reference in the linked list
};

// Linked list to store test references
static TestRef * s_FirstTest = NULL;
static TestRef * s_LastTest = NULL;
[/code]

This is a pretty simple class as it doesn’t need to do much more than register itself. In my sample, registerTest() is a global function that just adds the object to the linked list.

[code language="cpp" firstline="206"]
// Add a new test to the linked list
void registerTest(TestRef * test)
{
    if (s_FirstTest == NULL)
    {
        s_FirstTest = test;
        s_LastTest = test;
    }
    else
    {
        s_LastTest->next = test;
        s_LastTest = test;
    }
}
[/code]

Test Registration Macro

[code language="cpp" firstline="22"]
// Macro to register a test function
#define TEST(name) \
    static void name(); \
    static TestRef s_Test_ ## name(#name, __FILE__, __LINE__, name); \
    static void name()
[/code]

It simply declares the test function prototype, constructs a static test reference object passing the function pointer into the constructor and then declares the first line of the function implementation. Here’s an example of using it:

[code language="cpp"]
TEST(MyTest)
{
    // Your test implementation…
}
[/code]

When the macro is expanded by the preprocessor, it effectively becomes:

[code language="cpp"]
static void MyTest();
static TestRef s_Test_MyTest("MyTest", "example.cpp", 1, MyTest);
static void MyTest()
{
    // Your test implementation…
}
[/code]

I’ve inserted line breaks to make it easier to read.

Test Execution

This isn’t the exact code I’ve used in my sample, but it’s doing pretty much the same thing.

[code language="cpp" light="true"]
TestRef * test = s_FirstTest;
while (test != NULL)
{
    test->function();
    // Report success or failure...
    test = test->next;
}
[/code]

Assert Function

In my sample, I’ve just used an assert macro similar to one you’re probably already using in your own code.

[code language="cpp" firstline="14"]
// Assert macro for tests
#define TESTASSERT(cond) \
    do { \
        if (!(cond)) { \
            assertHandler(__FILE__, __LINE__, #cond); \
        } \
    } while (0)
[/code]

If the assert condition fails, the macro converts the condition into a string and passes it, along with the current file and line number, to an assert handler function that actually reports the failure.

This actually isn’t the best example for a unit testing framework as it’s really only testing for a true condition. If you were developing a fully featured framework, you would probably want more assert functions along the lines of ASSERT_EQUALS(actual,expected) and ASSERT_NOTEQUALS(actual,notexpected) so that you can report how the actual result from a test differs from what was expected. Implementing these types of functions isn’t too hard, so I won’t dwell on that now.
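
As a rough illustration (this isn’t part of the sample), an integer-only ASSERT_EQUALS built on the same assertHandler() might look something like the sketch below. A fuller framework would use templates or overloads so that values of any type could be formatted into the failure message.

[code language="cpp"]
// Sketch only: an equality assert that reports both values via assertHandler().
// It casts to long for formatting, so it only handles integer-like values.
#define ASSERT_EQUALS(actual, expected) \
    do { \
        if (!((actual) == (expected))) { \
            char buffer[256]; \
            snprintf(buffer, sizeof(buffer), "%s == %s (actual: %ld, expected: %ld)", \
                     #actual, #expected, (long)(actual), (long)(expected)); \
            assertHandler(__FILE__, __LINE__, buffer); \
        } \
    } while (0)
[/code]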

Assert Handler

[code language="cpp" firstline="242"]
// Handler for failed asserts
void assertHandler(const char * file, int line, const char * message)
{
    fprintf(stdout, "\nAssert failed (%s:%d):\n", file, line);
    fprintf(stdout, "    %s\n", message);
    fflush(stdout);

    dumpStack(1);

    _exit(1);
}
[/code]

The function reports the location of the failed assert along with the failed condition before dumping a stack trace and exiting. The reason for calling exit is that my framework actually runs tests in a child process separate from the test runner (more on that later). This is also why I’ve used fprintf with the stdout file handle rather than just using printf(). The child and parent processes share the same file handles, so I need to be explicit about where my output is going and when buffers are flushed so that I don’t get overlapping test output.

Dumping the Call Stack

For this, I’ve used a feature of glibc, which is one of the reasons my sample is written for *nix.

[code language="cpp" firstline="221"]
// Dump a stack trace to stdout
void dumpStack(int topFunctionsToSkip)
{
    topFunctionsToSkip += 1; // We always want to skip this dumpStack() function
    void * array[64];
    size_t size = backtrace(array, 64);
    backtrace_symbols_fd(array + topFunctionsToSkip, size - topFunctionsToSkip, 1); // Adjust the array pointer to skip n elements at top of stack
}
[/code]

I provide the ability to skip a number of calls at the top of the stack so that the assert and stack dumping functions aren’t reported in the call stack. The call stack is then written to stdout directly.

The function backtrace_symbols_fd() will attempt to resolve function symbols when it outputs the stack trace, but it can be a bit hit or miss with getting the names and will be affected by the optimisation level. To give yourself the best chance of getting symbols out, you need to compile with the -g option and link with -rdynamic if using gcc. When I compile and run the sample on my Raspberry Pi, I get the following call stack for a failed assert:

Assert failed (main.cpp:89):
    1 == false
./a.out[0x8c44]
./a.out(_Z7runTestP7TestRef+0x70)[0x8ed8]
./a.out(main+0xb8)[0x8d40]
/lib/libc.so.6(__libc_start_main+0x11c)[0x403cc538]

As you can see, it’s managed to find the symbols for some functions but not the one at the very top of the call stack which is where our assert failed. Fortunately, we can use the addr2line tool to look up this function:

pi@raspberrypi:~/Devs/cunit_sample$ addr2line -e a.out -f -s 0x8c44
TestAssertFailed
main.cpp:90

Calling addr2line can become quite tedious, so if you find yourself needing to do this regularly, it’s worth writing a script (e.g. in Python) to feed stack traces into addr2line, which is something I’ve done in the past.

Sample Test Function

[code language="cpp" firstline="74"]
TEST(TestPass1)
{
    int x = 1;
    int y = 2;
    int z = x + y;
    TESTASSERT(z == 3);
}
[/code]

Nothing too shocking there and hopefully very easy to implement.

Handling Fatal Crashes

The first C++ testing framework I wrote had all the tests running in the same process. If everything was working, this wasn’t a problem as all tests would pass without incident. However, if there was a fatal crash (e.g. attempting to use a null pointer), the entire test application would crash, halting all the tests and making it very difficult to assess the overall code health. This can be mitigated with signal handlers that catch crash conditions and attempt to gracefully clean up so that the test runner can keep on running. However, I still ran into bugs that could screw up the stack or heap in fatal ways, leaving me no better off in these situations.

In this sample framework, I’ve borrowed an idea from Google Chrome in that I run each test in its own process. This way a test can mess up its own process as much as it wants and it’s completely isolated from any of the other tests. It also enforces good practice with your tests as you can’t have one test depending on the side effects of another test. Each test is completely independent and can be guaranteed to run in any order which makes them much easier to debug. In addition, it makes my crash handling code much simpler as I don’t need to do any more than report the error and exit the process. Simpler code is good in my opinion.

Signal Handler

[code language="cpp" firstline="230"]
// Handler for exception signals
void crashHandler(int sig)
{
    fprintf(stdout, "\nGot signal %d in crash handler\n", sig);
    fflush(stdout);

    dumpStack(1);

    _exit(1);
}
[/code]

The handler uses the same stack dumping code as the assert handler and exits with a non-zero exit code to notify the parent test runner application that the test has failed.

The handler is registered with the following code in main():

[code language="cpp" firstline="104"]
// Register crash handlers
signal(SIGFPE, crashHandler);
signal(SIGILL, crashHandler);
signal(SIGSEGV, crashHandler);
[/code]

Here, I’ve used the antiquated signal() interface when really I should be using sigaction(). I’ve probably not registered all the signals that could indicate a fatal code bug either. This is something I may address in the future, but for now it provides a simple example of what I’m trying to achieve.
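
For reference, a minimal sketch of what the sigaction() version might look like is below (same crashHandler() as above; the extra SIGBUS registration and the SA_RESETHAND flag are my own assumptions rather than part of the sample):

[code language="cpp"]
// Sketch only: registering the crash handler with sigaction() instead of signal().
// SA_RESETHAND restores the default action after the first signal so a crash
// inside the handler itself can't loop forever.
struct sigaction sa = {};
sa.sa_handler = crashHandler;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESETHAND;

sigaction(SIGFPE, &sa, NULL);
sigaction(SIGILL, &sa, NULL);
sigaction(SIGSEGV, &sa, NULL);
sigaction(SIGBUS, &sa, NULL); // Bus errors also usually indicate a fatal bug
[/code]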

Spawning the Child Test Process

For simplicity, my test runner just forks itself, as that’s one of the easiest ways to launch a child process on *nix. It also has the advantage of not needing much configuration in the child process in order to run the test.

I’ve wrapped the forking and running of a single test in a function to keep all the logic in one place:

[code language="cpp" firstline="156"]
// Wrapper function to run the test in a child process
bool runTest(TestRef * test)
{
    // Fork the process, the test will actually be run by the child process
    pid_t pid = fork();

    switch (pid)
    {
    case -1:
        fprintf(stderr, "Failed to spawn child process, %d\n", errno);
        exit(1); // No point running any further tests

    case 0:
        // We're in the child process so run the test
        test->function();
        exit(0); // Test passed, so exit the child with a success code

    default:
    {
        // Parent process, wait for the child to exit
        int stat_val;
        pid_t child_pid = wait(&stat_val);

        if (WIFEXITED(stat_val))
        {
            // Child exited normally so check the return code
            if (WEXITSTATUS(stat_val) == 0)
            {
                // Test passed
                return true;
            }
            else
            {
                // Test failed
                return false;
            }
        }
        else
        {
            // Child process crashed in a way we couldn't handle!
            fprintf(stdout, "Child exited abnormally!\n");
            return false;
        }

        break;
    }
    }
}
[/code]

After the process is forked, the child process calls the test function referenced in the passed-in TestRef object. If the function completes without incident, the child exits with a zero exit code to indicate success. The parent process waits for the child process to exit and then logs success or failure of the test based on the exit code of the child process.

The main test runner loop is:

[code language="cpp" firstline="111"]
int testCount = 0;
int testPasses = 0;

// Loop round all the tests in the linked list
TestRef * test = s_FirstTest;
while (test != NULL)
{
    // Print out the name of the test we're about to run
    fprintf(stdout, "%s:%s... ", test->module, test->name);
    fflush(stdout);

    testCount++;

    bool passed = runTest(test);
    if (passed == true)
    {
        testPasses++;
        fprintf(stdout, "Ok\n");
    }
    else
    {
        fprintf(stdout, "FAILED\n");
    }

    // Get the next test and loop again
    test = test->next;
}
[/code]

Plugging Into Build Server

This is just a case of following the Unix principle of your process returning 0 if you’re happy or non-zero if not. In my main() function, I keep a count of the number of tests run and the number of tests passed. I then have the following at the end of main():

[code language="cpp" firstline="139"]
// Print out final report
int exitCode;
if (testPasses == testCount)
{
    fprintf(stdout, "\n*** TEST SUCCESS ***\n");
    exitCode = 0;
}
else
{
    fprintf(stdout, "\n*** TEST FAILED ***\n");
    exitCode = 1;
}
fprintf(stdout, "%d/%d Tests Passed\n", testPasses, testCount);

return exitCode;
[/code]

Pretty much every build server has the ability to launch external processes as part of a build and report a build failure if that process doesn’t exit with a zero code. It’s just a case of building your test framework as part of your normal build process and then executing it as a post build step. Everyone should be doing it!

Other Platforms and Future Improvements

As I mentioned earlier, this sample will only work on *nix platforms. However with a bit of work, most of these ideas can be ported to other platforms.

Call Stack Dumping

Although there is no standard way to get a stack trace, it’s been possible on every platform I’ve used so far, some being easier than others.

For Windows, here’s one example, and there’s also the CaptureStackBackTrace() function in the Windows API.

Fatal Exception Handling

I’ve already mentioned that I should switch to using sigaction() rather than signal() for registering my crash handlers.

On Windows, you could use Structured Exception Handling (SEH) to detect access violations and other fatal errors. Here’s a Stack Overflow question that covers some of the pros and cons of SEH. This is something that’s always going to be very platform specific so you may have to research this for yourself if you’re using something a bit more esoteric.

Child Process Spawning

This is one area I could put a lot more effort in. Currently, I’m only using fork() which isn’t available on all platforms and only gives me limited control over the child process. If instead I launched the child processes as completely separate processes that I attached to using stdout/stderr and specified which test to run using command line arguments, I’d have a much more portable solution. It would make debugging individual tests much easier as I could launch the test process directly from my debugger without needing to run through a complete test cycle. This would also give me more options over how I implemented my test runner as I could develop a GUI application in a completely different language if I wanted or implement distributed tests across multiple machines if my test cycle could take a long time. Finally, reading test output such as failed assertions and call stacks from stdout of the child process rather than letting the child write directly to stdout of the parent process would allow the test runner to present the output in a much nicer way or redirect it to a separate log file that only contained info about failed tests.
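
To give an idea of the direction, here’s a rough sketch of launching a test as a completely separate process; the --test argument and the exact exec/wait handling are my own assumptions and not part of the sample:

[code language="cpp"]
// Sketch only: re-launch the test binary with a --test argument so that the
// child process runs exactly one named test. main() would need a matching
// branch that runs only that test when --test is passed on the command line.
bool runTestInSeparateProcess(const char * exePath, TestRef * test)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        // Child: replace this process image with a fresh copy of the runner
        execl(exePath, exePath, "--test", test->name, (char *)NULL);
        _exit(127); // Only reached if execl() failed
    }

    // Parent: wait for the child and treat a zero exit code as a pass
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
[/code]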

If I were to develop this sample further, this is an area I would certainly put more effort into.

More Restrictive Platforms

A few platforms I’ve worked on have only supported running a single process at a time. Launching another process results in the current running process being unloaded and completely replaced by the child process. This makes running tests in a background child process completely impossible. In these situations, I’d have the runTest() function run the test directly in the current process. The assert and crash handlers would also need to be updated to return control to the test runner in the case of a test failure. Your best bet would be to use normal C++ exceptions for this, but if you really don’t want to use them, you could use setjmp()/longjmp(). Whichever way you go, fatal application crashes are likely to halt your test cycles early.
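
As a minimal sketch (assuming a single-threaded runner and a global jump buffer, neither of which is part of the sample), the setjmp()/longjmp() approach could look something like this:

[code language="cpp"]
#include <setjmp.h>

// Sketch only: jump target shared between the runner and the failure handlers
static jmp_buf s_TestFailJump;

bool runTestInProcess(TestRef * test)
{
    if (setjmp(s_TestFailJump) != 0)
    {
        // We got back here via longjmp() from a failed assert
        return false;
    }
    test->function();
    return true;
}

// assertHandler() would then end with longjmp(s_TestFailJump, 1) instead of
// _exit(1). Jumping out of a signal handler is murkier; you'd really want
// sigsetjmp()/siglongjmp() there, and a crash may still leave the process
// in a bad state.
[/code]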

If possible, I’d try to get the code I was testing to also compile on another platform such as Windows or Linux and perform most of my testing there. If you get to the point where all your tests are passing, running the tests on your target platform should just be a formality to make sure the platform-specific parts of your code are also working.

Before/After Method Events

Something that I haven’t implemented in this sample, but which would be very easy to add, is before and after method events so that common set-up and tear-down code could be automatically called by the test runner. This is a standard feature of just about every other framework, so I wouldn’t consider a framework I wrote complete without it.
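
One way it could be done, staying with the registration idea used for the tests themselves, is sketched below. The BEFORE_EACH()/AFTER_EACH() macros and the single pair of global hooks are my own assumptions; a fuller framework would scope the hooks per test fixture rather than globally.

[code language="cpp"]
// Sketch only: optional global hooks the runner calls around every test
typedef void (*pfnHookFunc)(void);
static pfnHookFunc s_BeforeEach = NULL;
static pfnHookFunc s_AfterEach = NULL;

#define BEFORE_EACH() \
    static void beforeEachImpl(); \
    static int s_BeforeEachReg = (s_BeforeEach = beforeEachImpl, 0); \
    static void beforeEachImpl()

#define AFTER_EACH() \
    static void afterEachImpl(); \
    static int s_AfterEachReg = (s_AfterEach = afterEachImpl, 0); \
    static void afterEachImpl()

// The child process would then wrap each test like this:
//     if (s_BeforeEach) s_BeforeEach();
//     test->function();
//     if (s_AfterEach) s_AfterEach();
[/code]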

Debugging JMeter Plug-ins

When developing your own plugins, you’ll no doubt need to debug some sort of problem sooner or later. Writing log messages or even just printing directly to stdout will often give you the info you need, but eventually, you’ll want to hook up a debugger and step through your code. Fortunately, this is very easy and just needs the following lines added to the top of your start up scripts:

Linux/Mac – bin/jmeter.sh

JVM_ARGS="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000"

Windows – bin/jmeter.bat

set JVM_ARGS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000

Now when you launch JMeter using either of these scripts, the JVM will start up with the debugger enabled and listening on port 8000. You can then use your IDE of choice to debug your plug-in. In my case, I’ve used Eclipse. To set up the debugger connection, select Debug Configurations… from the Run menu and you should see a dialog similar to the following:

The important things to note here are that you want to add a new Remote Java Application debug configuration and that the project should be set to your plug-in project. Optionally, you might also want to add this configuration as a favourite to your debug menu, which can be done on the Common tab.

When you launch this configuration, the Eclipse debugger will establish a connection to your running JMeter instance and you will then be able to set breakpoints and step through your code as normal. As an added bonus, if you’ve also downloaded the JMeter source code, you’ll be able to attach it to the JMeter JARs in your project, allowing you to step through the JMeter code as well without having to build it yourself. This is a great way to learn how JMeter works internally, which may help you better understand any problems you see with your plug-ins.

Faster JMeter If Controllers

When load testing at a large scale, it’s just as important for your test scripts to be optimized as it is for your servers. Your tests will often need to simulate thousands of users on a single physical box running JMeter, so you need to be running as efficiently as possible. If you are using the standard If Controller in your test with its default configuration, then it’s likely that you’re burning CPU cycles and resources just to check if two variables are the same.

By default, the If Controller will use JavaScript to evaluate the condition you’ve specified. This means that JMeter will create a brand new JavaScript execution context each time an If Controller is processed. This happens for every controller in every thread. That will give your garbage collector a good workout as well as your servers!

Fortunately, the JMeter team knew this could be a problem and so added the option to evaluate the condition as a variable expression. This allows you to create a JMeter function that checks your condition and returns either true or false. This is tens to hundreds of times faster.

Creating custom functions is even easier than creating a test component. You just need to add the class file to your plugin project and JMeter will automatically find it and do the rest. Here’s another article that goes into the specifics of JMeter functions a bit more and you can also read my original guide on creating custom components.

Common Checks

The most common checks I perform in my scripts are to test if a variable is equal to some value. Using a JavaScript condition, you would specify something like ${value} == 1. However, for my tests, I’ve created my own __eq() function for quickly comparing two values:

[code language="java"]
package org.adesquared.jmeter.functions;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

import org.apache.jmeter.engine.util.CompoundVariable;
import org.apache.jmeter.functions.AbstractFunction;
import org.apache.jmeter.functions.InvalidVariableException;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.samplers.Sampler;

public class EqFunction extends AbstractFunction {

    private final static String NAME = "__eq";
    private final static ArrayList<String> DESC = new ArrayList<String>(2);

    private CompoundVariable a;
    private CompoundVariable b;

    static {
        DESC.add("a");
        DESC.add("b");
    }

    @Override
    public String getReferenceKey() {
        return NAME;
    }

    public List<String> getArgumentDesc() {
        return DESC;
    }

    @Override
    public String execute(SampleResult previousResult, Sampler currentSampler) throws InvalidVariableException {

        String a = this.a.execute();
        String b = this.b.execute();

        return a.equals(b) ? "true" : "false";

    }

    @Override
    public void setParameters(Collection<CompoundVariable> parameters) throws InvalidVariableException {

        if (parameters.size() < 2) throw new InvalidVariableException("Not enough parameters for " + NAME);

        Iterator<CompoundVariable> it = parameters.iterator();
        this.a = it.next();
        this.b = it.next();

    }

}
[/code]

Using that, I can enable the Interpret Condition as Variable Expression option in the If Controller and change the condition to ${__eq(${value},1)}. That will give me exactly the same functionality as before but without JMeter creating a new JavaScript context each time.

How much faster?

To test how much faster, I put together a small script with a 100,000-iteration Loop Controller containing an If Controller, which in turn contained a Test Action Sampler (a sampler that does nothing in this case). I defined the variable value to have the value 1 in the test. You can download the test script from here.

When the test is executed using JavaScript to interpret the condition, it takes 78 seconds to complete. After changing the condition to a variable expression using my new __eq() function, the test took less than 2 seconds. Both tests ran using the server VM.

I know which one I’m going to continue to use in the future!

Bonus Not Equals Function

Of course, I also often need to check that two values aren’t equal; for that, I use this function:

[code language="java"]
package org.adesquared.jmeter.functions;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

import org.apache.jmeter.engine.util.CompoundVariable;
import org.apache.jmeter.functions.AbstractFunction;
import org.apache.jmeter.functions.InvalidVariableException;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.samplers.Sampler;

public class NeqFunction extends AbstractFunction {

    private final static String NAME = "__neq";
    private final static ArrayList<String> DESC = new ArrayList<String>(2);

    private CompoundVariable a;
    private CompoundVariable b;

    static {
        DESC.add("a");
        DESC.add("b");
    }

    @Override
    public String getReferenceKey() {
        return NAME;
    }

    public List<String> getArgumentDesc() {
        return DESC;
    }

    @Override
    public String execute(SampleResult previousResult, Sampler currentSampler) throws InvalidVariableException {

        String a = this.a.execute();
        String b = this.b.execute();

        return a.equals(b) ? "false" : "true";

    }

    @Override
    public void setParameters(Collection<CompoundVariable> parameters) throws InvalidVariableException {

        if (parameters.size() < 2) throw new InvalidVariableException("Not enough parameters for " + NAME);

        Iterator<CompoundVariable> it = parameters.iterator();
        this.a = it.next();
        this.b = it.next();

    }

}
[/code]

Using JMeter’s Table Editor

Following on from my previous post about creating custom JMeter components, I thought it worth taking a look at using the test bean table editor as it’s a good way of editing lists for your own components. Unfortunately, it doesn’t have much documentation and only actually works due to type erasure!

I’ve put together another sample that uses the table editor to show how it works. This time, I’ve created a config element that resets variables on each loop iteration. This is different from the User Defined Variables component, which only sets the variables once.

Table Editor Class Overview

The table editor can be found in the org.apache.jmeter.testbeans.gui package along with a number of other additional property editors. It allows you to edit properties that are backed by a list of objects (e.g. a list of key/value pairs) without having to try to create a complex interface using the standard text boxes. An example of using a table is the User Defined Variables config element (although this doesn’t actually use the table editor that’s available to test beans).

To use the table editor, you must define a class that represents a row of data in the table. This class must have a public, zero-argument constructor and get/set property functions for the columns you want to edit. In addition to this, the class must also extend the AbstractTestElement class. This is what caused me a lot of problems initially as it’s not mentioned in the docs, and although it may appear to work if you don’t do this, you will run into problems such as breaking the JMeter GUI or not being able to save your tests.

Once you’ve extended that class, you must also make sure that you save any of your properties in the AbstractTestElement properties map; otherwise they won’t be saved to the JMX file or passed to your components correctly when you run your test. In my table editor sample, I used the following class for my row data:

[code language="java" firstline="23"]
// A class to contain a variable name and value.
// This class *MUST* extend AbstractTestElement otherwise all sorts of random things will break.
public static class VariableSetting extends AbstractTestElement {

    private static final long serialVersionUID = 5456773306165856817L;
    private static final String VALUE = "VariableSetting.Value";

    /*
     * We use the getName()/setName() property from the super class.
     */

    public void setValue(String value) {
        // Our property values must be stored in the super class's properties map or they won't be saved to the JMX file correctly.
        setProperty(VALUE, value);
    }

    public String getValue() {
        return getPropertyAsString(VALUE);
    }

}
[/code]

My property functions in my component are:

[code language="java" firstline="47"]
// Our variable list property
private List<VariableSetting> settings;

public void setVariableSettings(List<VariableSetting> settings) {
    this.settings = settings;
}

public List<VariableSetting> getVariableSettings() {
    return this.settings;
}
[/code]

Configuring the table editor

The table editor is configured through property descriptor variables in your bean info class and so is really easy to set up. The only additional work you need to do is manually request the localised strings for your column headers from your resource file. Here’s my bean info file as an example:

[code language="java"]
package org.adesquared.jmeter.config;

import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.ResourceBundle;

import org.adesquared.jmeter.config.ResetVariablesConfig.VariableSetting;
import org.apache.jmeter.testbeans.BeanInfoSupport;
import org.apache.jmeter.testbeans.gui.TableEditor;

public class ResetVariablesConfigBeanInfo extends BeanInfoSupport {

    private final static String VARIABLE_SETTINGS = "variableSettings";
    private final static String HEADER_NAME = "header.name";
    private final static String HEADER_VALUE = "header.value";

    private final static ArrayList<VariableSetting> EMPTY_LIST = new ArrayList<VariableSetting>();

    public ResetVariablesConfigBeanInfo() {

        super(ResetVariablesConfig.class);

        // Get the resource bundle for this component. We need to do this so that we can look up the table header localisations
        ResourceBundle rb = (ResourceBundle) getBeanDescriptor().getValue(RESOURCE_BUNDLE);

        PropertyDescriptor p;

        p = property(VARIABLE_SETTINGS);
        p.setValue(NOT_UNDEFINED, Boolean.TRUE);
        p.setValue(DEFAULT, EMPTY_LIST);

        // Set this property to be edited by the TableEditor
        p.setPropertyEditorClass(TableEditor.class);
        // Set the class that represents a row in the table
        p.setValue(TableEditor.CLASSNAME, VariableSetting.class.getName());
        // Set the properties for each column
        p.setValue(TableEditor.OBJECT_PROPERTIES, new String[] {
            "name",
            "value"
        });
        // Set the table header display strings
        // These must be read directly from the resource bundle if you want to localise them
        p.setValue(TableEditor.HEADERS, new String[] {
            rb.getString(HEADER_NAME),
            rb.getString(HEADER_VALUE)
        });

    }
}
[/code]

And how the table editor shows up in the JMeter GUI:

And once again, you can see how easy it is to extend JMeter with your own components when you know how.

A final note on type erasure

As I mentioned earlier, the table editor only works due to type erasure, and I wanted to look at it quickly as it’s something that catches quite a few people out. In Java, type erasure is the loss of generic type information at runtime. A List<String> becomes a plain old List. This means that at runtime, it is possible (for example via a raw List reference or an unchecked cast) to assign a List<String> to a List<HashMap<Integer, Boolean>> reference without any problems until you try to get a value from the list. Often, the compiler can spot these problems and will report an error, but as with anything, it’s possible to trick the compiler.

Here’s the code from JMeter that takes advantage of this:

[code language="java" firstline="285" highlight="299"]
/**
 * Convert a collection of objects into JMeterProperty objects.
 *
 * @param coll Collection of any type of object
 * @return Collection of JMeterProperty objects
 */
protected Collection<JMeterProperty> normalizeList(Collection<?> coll) {
    if (coll.isEmpty()) {
        @SuppressWarnings("unchecked") // empty collection
        Collection<JMeterProperty> okColl = (Collection<JMeterProperty>) coll;
        return okColl;
    }
    try {
        @SuppressWarnings("unchecked") // empty collection
        Collection<JMeterProperty> newColl = coll.getClass().newInstance();
        for (Object item : coll) {
            newColl.add(convertObject(item));
        }
        return newColl;
    } catch (Exception e) {// should not happen
        log.error("Cannot create copy of "+coll.getClass().getName(),e);
        return null;
    }
}
[/code]

What this function is doing is accepting a collection of any type and converting it to a collection of JMeterProperty objects. However, on line 299, a new collection is created using the class of the collection that is passed in. This new collection is then assigned to a Collection<JMeterProperty> reference. When using the table editor, the collection passed into this function will be of whatever type you have used for your table row data (Collection<VariableSetting> in the case of my sample). The compiler has obviously picked up on creating and assigning an object this way as being unsafe which is why the @SuppressWarnings annotation has been added.

However, this isn’t a mistake or bug in the code, this is quite intentional and a clever way of constructing a new collection with the same implementation as the input collection. Collection<E> is just an interface and so you can’t construct a collection object directly. Instead, you must construct something like a HashSet or an ArrayList, objects that implement the Collection interface. The function above has decided not to make any assumption about which implementation the normalised collection should use and instead constructs a new collection using the input collection’s class.

So, if the input collection was of type ArrayList<String>, the collection that is returned will be ArrayList<JMeterProperty>, the same implementation but storing a different object type.