Wednesday, May 27, 2015

Graphs for JMeter using Elasticsearch and Kibana

Disclaimer: I have just done some initial tests with Elasticsearch (thank you, Google) - I have no production experience with it and no idea how to set it up properly. It's possible the example below has really basic mistakes (in which case, please do point them out!). Note also that the easiest way to integrate this is to configure the JMeter result to be a CSV file and simply import that using Logstash - but then I don't have a blog post to write.

JMeter 2.13 came out with added support for a BackendListener, i.e. a way to send your results to some store where you can analyze the test results and draw pretty pictures for the powers that be. This was always possible with a custom listener, but the person writing it would have to make it perform reasonably well. The new listener runs on its own thread with a buffer, so it is somewhat easier to implement. I also stumbled upon the Logstash + Elasticsearch + Kibana family while solving a log parsing problem, and so here is an attempt to integrate that with JMeter.

Prerequisites - This assumes you have downloaded Elasticsearch and Kibana and have managed to get them running and connected to each other.
I used Elasticsearch v1.5.2 and Kibana v4.0.2.

Step 1: Define the mapping for the data that we are going to send to Elasticsearch
This step is, strictly speaking, not necessary - Elasticsearch can dynamically add indexes, types, or fields for a type - more on why I needed to do this later.
For now: everything (referred to as a document) we store in Elasticsearch goes into an index and has a type. The type is similar to a class definition - it defines what fields we store, whether they are strings, numbers, or dates, and what we expect Elasticsearch to do with them (index, analyze, store, etc.).
In our example, we will store our data in indexes named jmeter-elasticsearch-yyyy-MM-dd (I appended the timestamp because Logstash does it by default - this limits the size of a particular index, and that's probably good when you have many tests and distribute Elasticsearch over multiple nodes).
We will define a type called SampleResult that closely matches the JMeter Java object SampleResult. We don't really need all the fields, and if we wanted to optimise the interface, we'd probably drop a few. For now we'll take almost everything other than the response data.

Elasticsearch allows you to define an index template, so that all created indexes that match the template get the same settings, so we'll use that feature.

Here is the template we will use
   "template" : "jmeter*",
   "mappings" : {
            "SampleResult": {
                          "properties":    {

We indicate that our template should apply to any index whose name begins with jmeter*
We also define a mapping called SampleResult, with its properties and their types. We mark some properties as "not_analyzed" - meaning the field should be indexed and searchable, but no analysis is needed on it. If I didn't set this value, then because some of the URLs in my example use '-', which is generally a word separator for search tools (like Lucene, which is what Elasticsearch uses), my graphs were getting messed up. This is the main reason I defined the mapping explicitly (especially as Elasticsearch doesn't allow you to change this value once defined, short of deleting and recreating the index).
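For instance, inside the "properties" section of the template above, a not_analyzed field is declared like this (Elasticsearch 1.x mapping syntax; the exact field list here is illustrative - the full template is at [1]):

```
"properties": {
    "SampleLabel": { "type": "string", "index": "not_analyzed" },
    "URL":         { "type": "string", "index": "not_analyzed" },
    "timestamp":   { "type": "date" }
}
```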

We now need to tell Elasticsearch about this template. That's done by making an HTTP PUT request to a URL. I use Poster for this purpose. Fill in the server and (HTTP) port used by Elasticsearch, paste the template into it, and PUT.

The server should acknowledge it.
You can verify that all is well by navigating to http://[elasticsearchserver]:[httpport]/_template
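If you prefer the command line to Poster, the same PUT can be made with curl against a running Elasticsearch (the server, port, template name and file name here are all illustrative):

```shell
# Register the index template (template.json holds the mapping shown above).
curl -XPUT "http://localhost:9200/_template/jmeter_template" -d @template.json

# List the registered templates to verify it was stored.
curl "http://localhost:9200/_template"
```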

Step 2: Implement JMeter's BackendListenerClient
In order to get JMeter to pass our code the results, we need to implement the BackendListenerClient interface. This has the following methods:
setupTest/teardownTest - As the names suggest, these are initialization/cleanup methods. They are called once each time the test is run.
getDefaultParameters - These are useful to hint things to the end user (e.g. the parameter name for the Elasticsearch server and a default value for it, like 'localhost').
handleSampleResults - This is the method JMeter invokes to provide the results of the test. A List of SampleResult objects is provided as input; the size of the list will be at most the Async Queue Size specified when the listener is configured.
createSampleResult - This allows you to create a copy of the SampleResult so that if you modify it in some way, it doesn't change anything for the rest of the listeners.

The javadoc for BackendListenerClient helpfully tells you to extend AbstractBackendListenerClient instead of implementing BackendListenerClient directly - so that is what we will do. This class has a few additional methods (and since I haven't been able to figure out what they do, we will simply pretend they don't exist). We can also download the JMeter source code and read the existing implementation of GraphiteBackendListenerClient (basically copy-paste as much as we can - a Java development best practice).

So with that, here are some snippets - the complete source / project / jar is at [1].
    public Arguments getDefaultParameters() {
        Arguments arguments = new Arguments();
        arguments.addArgument("elasticsearchCluster", "localhost:" + DEFAULT_ELASTICSEARCH_PORT);
        arguments.addArgument("indexName", "jmeter-elasticsearch");
        arguments.addArgument("sampleType", "SampleResult");
        arguments.addArgument("dateTimeAppendFormat", "-yyyy-MM-dd");
        arguments.addArgument("normalizedTime", "2015-01-01 00:00:00.000-00:00");
        arguments.addArgument("runId", "${__UUID()}");
        return arguments;
    }
We define some parameters that we will use. For example, the elasticsearchCluster parameter defines which Elasticsearch server(s) we need to connect to. The indexName defines the prefix used to create the index (remember that our template gets applied to all indexes that begin with "jmeter"). The dateTimeAppendFormat is a suffix we append to the indexName. We can ignore the other parameters for now.

Next, by referring to the Java API for Elasticsearch, we can write some code to connect to and disconnect from the cluster in the setup and teardown methods.
public void setupTest(BackendListenerContext context) throws Exception {
    String elasticsearchCluster = context.getParameter("elasticsearchCluster");
    String[] servers = elasticsearchCluster.split(",");
    indexName = context.getParameter("indexName");
    dateTimeAppendFormat = context.getParameter("dateTimeAppendFormat");
    if(dateTimeAppendFormat != null && dateTimeAppendFormat.trim().equals("")) {
        dateTimeAppendFormat = null;
    }
    sampleType = context.getParameter("sampleType");
    client = new TransportClient();
    for(String serverPort : servers) {
        String[] serverAndPort = serverPort.split(":");
        int port = 9300; // Java API (transport) port - not the HTTP port
        if(serverAndPort.length == 2) {
            port = Integer.parseInt(serverAndPort[1]);
        }
        ((TransportClient)client).addTransportAddress(new InetSocketTransportAddress(serverAndPort[0], port));
    }
}

public void teardownTest(BackendListenerContext context) throws Exception {
    client.close();
}
The setupTest code reads the elasticsearchCluster parameter (set in the JMeter GUI by the user), parses it, and adds the appropriate addresses to the TransportClient (note that the Java API port - default 9300 - is different from the HTTP API port - default 9200). The teardownTest method shuts down the client.

And with that we get to the bulk of our code (again, this code might be inefficient).
First, let's create a Map from a JMeter SampleResult object. Elasticsearch can treat a Map object as if it were a JSON object.
private Map<String, Object> getMap(SampleResult result) {
    Map<String, Object> map = new HashMap<String, Object>();
    map.put("ResponseTime", result.getTime());
    map.put("ResponseCode", result.getResponseCode());
    //snip lots more code
    return map;
}

Next, let's use the API to create a document for each result.
public void handleSampleResults(List<SampleResult> results,
        BackendListenerContext context) {
    String indexNameToUse = indexName;
    for(SampleResult result : results) {
        Map<String,Object> jsonObject = getMap(result);
        if(dateTimeAppendFormat != null) {
            SimpleDateFormat sdf = new SimpleDateFormat(dateTimeAppendFormat);
            indexNameToUse = indexName + sdf.format(jsonObject.get(TIMESTAMP));
        }
        client.prepareIndex(indexNameToUse, sampleType).setSource(jsonObject).execute().actionGet();
    }
}

We dynamically create an index name based on the timestamp of the SampleResult object, tell Elasticsearch to prepare the index (which it will create if it doesn't exist), pass in the Map object (the document we want to store), and execute this - i.e. add the document to the index.
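The index-name construction can be exercised outside JMeter. Here is a standalone sketch (the class name is mine, and the timezone is pinned to UTC so the output is deterministic):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Standalone sketch of the index-name logic used in handleSampleResults.
public class IndexNameDemo {
    // Builds names like "jmeter-elasticsearch-2015-05-27" from a result timestamp.
    static String indexNameFor(String prefix, String pattern, long timestampMillis) {
        SimpleDateFormat sdf = new SimpleDateFormat(pattern);
        sdf.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone for a deterministic demo
        return prefix + sdf.format(new Date(timestampMillis));
    }

    public static void main(String[] args) {
        // 1432728000000L is 2015-05-27T12:00:00Z
        System.out.println(indexNameFor("jmeter-elasticsearch", "-yyyy-MM-dd", 1432728000000L));
        // -> jmeter-elasticsearch-2015-05-27
    }
}
```

Note the lowercase dd: in SimpleDateFormat, DD means day-of-year, so a pattern like -yyyy-MM-DD would produce names such as jmeter-elasticsearch-2015-05-147 (the jmeter* template would still match them, but the names would be odd).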

Next we compile and create a JAR file for our code and put it in $JMETER_HOME/lib/ext.
The dependencies are listed in the .classpath file.

We also need to add the dependent jars (the Elasticsearch API, Lucene) into $JMETER_HOME/lib. Copy them from the Elasticsearch installation.

Step 3: Modify the JMeter test to use the Listener we created
We should be able to add the BackendListener like any other listener to our JMeter plan (Add -- Listener -- Backend Listener).
In the Backend Listener Implementation dropdown, we should see the class we created, and selecting it should fill the Parameters section with the parameters we returned from the getDefaultParameters method. If not, we need to check the logs and verify that we compiled our code with the same Java version we are running JMeter on, that we copied the JAR to the right place, that we copied the dependencies, etc. Next we update the value of the elasticsearchCluster parameter to point to our server, and we can then run the test.

Let's assume we are running the test and the log isn't printing error after error.
The Elasticsearch HTTP API will let you see the data, but we can also use a plugin like Elasticsearch Head. Install this plugin, then navigate to http://server:httpport/_plugin/head/ to see whether our index is created and whether it is getting data. Looks like it is.

Step 4: Looking at the data in Kibana
If Kibana isn't started already, start it.

Navigate to the Kibana server:port in the browser, click Settings, then the Indices tab.
Our indices are named jmeter-..., so that's what we give in the index pattern field. Ensure that "Index contains time-based events" is checked, and in the Time-field name dropdown choose timestamp.

On the next screen click the star icon to make this index pattern the default - We can also see the fields that match what we defined in our template.

Now click Discover - if you just finished running your test or it's still running, you should see some data. Kibana's default time range is Last 15 minutes, but you can change that by clicking the top right-hand corner. Note that the index selected is jmeter* - which is what we want. Change it if it's something else by clicking the settings icon on the upper right-hand side.

Let's try to see 90th percentile response times. Click Visualize. It asks what type of chart you want - let's use a Line chart and a new search. When prompted, use jmeter* for the index pattern.

On the left-hand side, for the Y axis choose "Percentile" as the aggregation, "ResponseTime" as the field, and 90 as the percentile value. For the X axis choose Date Histogram as the aggregation, timestamp as the field, and Auto as the interval.
Then click Apply.

And we see a nice graph of the 90th percentile across all samples - nice, but not very useful.

But that's OK - we can also filter data by typing in queries. In the example below, only the category pages (where the JMeter sample label is "Request Category Page") are used to calculate the percentiles.

Let's drill down further. My example JMeter script iterated over a number of URLs (category pages), and the JMeter request had a common label, "Request Category Page". Let's see the top 3 worst-performing categories. The Y axis remains the same. For the buckets we first choose Split Lines, use "Terms" as the aggregation and URL as the field (as each category has its own URL), take the top 3 URLs ordered by the 90th percentile, and then add an X axis sub-aggregation of Date Histogram on the timestamp field.

You can even save this visualization and add it to a Dashboard.
Once the data is in Elasticsearch, all the features of Elasticsearch and Kibana are available.

Let's look at a couple of more involved examples.

Saving variables
Let's assume that one of the steps in the test script is a search. The script reads a set of terms from a file into a variable (sku) and executes a search transaction. However, the application we are testing has some issues: if a search term is a complex phrase, returns too many results, or has too many rules defined for it (synonyms, etc.), then it takes time. We want to analyze our results either for search as a whole or for the specific terms we searched for. One option is to modify the sample label to be something like Search - ${sku} and use wildcards whenever we want to see search as a whole. But it would be cleaner if we could keep the sample label as Search and somehow save the variable "sku" and its value as part of the document we index.
A normal listener has access to the vars object or can get the variables from the thread. But the BackendListener interface only has a SampleResult and no access to the original thread, due to its async nature. One option would be to modify the SampleResult (in a different listener) to add these variables into it (in a sub-SampleResult or something). But since in this example I only want a single variable, let's just use a poor man's delimiter within the sample label.
So what we will do is change the sample label to be something like Search~sku=${sku}.
a) We tell Elasticsearch about this field in the mapping (well, actually we already did - if you look at the mapping we specified, we already had a "sku" field defined as a not_analyzed string).
b) We then modify our JMeter script to change the sample label.

c) In our listener, we change the code to parse the sample label and set the values correctly in the getMap method, where we transform the JMeter result into a Map object.
String[] sampleLabels = result.getSampleLabel().split(VAR_DELIMITER);
map.put("SampleLabel", sampleLabels[0]);
for(int i = 1; i < sampleLabels.length; i++) {
    String[] varNameAndValue = sampleLabels[i].split(VALUE_DELIMITER);
    map.put(varNameAndValue[0], varNameAndValue[1]);
}
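The parsing can be exercised standalone. A self-contained sketch (the class name is mine; ~ and = are the separator scheme used in this post):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of the SampleLabel parsing: "Search~sku=ABC123" becomes
// a clean label plus one map entry per embedded variable.
public class LabelParserDemo {
    static final String VAR_DELIMITER = "~";
    static final String VALUE_DELIMITER = "=";

    static Map<String, String> parse(String sampleLabel) {
        Map<String, String> map = new HashMap<String, String>();
        String[] parts = sampleLabel.split(VAR_DELIMITER);
        map.put("SampleLabel", parts[0]);
        for (int i = 1; i < parts.length; i++) {
            // Limit of 2 keeps any '=' characters inside the value intact.
            String[] nameAndValue = parts[i].split(VALUE_DELIMITER, 2);
            if (nameAndValue.length == 2) {
                map.put(nameAndValue[0], nameAndValue[1]);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(parse("Search~sku=ABC123"));
    }
}
```

The split limit and the length check are small defensive additions over the listener snippet above; a label with no embedded variables just passes through with its full text as the SampleLabel.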

And that's it (well - compile, copy the listener jar, and restart JMeter). Run the JMeter script and we should start seeing data. Here's the 90th percentile graph for search in general - the Y axis is aggregated by Percentile, and the X axis is a Split Line on a Terms aggregation on the field SampleLabel, sub-aggregated by a Date Histogram on timestamp.

And here's the same thing for individual SKUs - the Y axis is aggregated by Percentile, and the X axis is a Split Line on a Terms aggregation on the field sku, sub-aggregated by a Date Histogram on timestamp.

Any variable (or multiple variables) can be handled the same way, using ~ as the separator and key=value for each variable.

Comparing runs
A common performance task: you have a set of data, you make a change, and you want to compare the runs before and after the change. Let's try to compare graphs. I couldn't figure out how to do this directly in Kibana, so let's do a quick and dirty fix:
1) We'll settle on an epoch date, say 2015-01-01 00:00:00. No matter when we actually started our test, we are going to pretend we started it at 2015-01-01 00:00:00. For all requests that are part of the script, we shift the time so that all times are relative to 2015-01-01 00:00:00.
2) Because of 1), we no longer have to compare different time series - all the results will appear to be on the same time frame. However, we do need a way to differentiate the two runs, so we'll introduce another variable and call it runId. This just needs to be unique so that we can tell the runs apart.

a) We add these new fields to the mapping for Elasticsearch (well, we did it already) - the fields are NormalizedTimestamp and RunId.
b) We modify the Java code to add these parameters and default values - that was done in the first code snippet too - normalizedTime and runId.
c) Next we modify the listener code to add this new timestamp.
We calculate the offset of the start time from the epoch:
String normalizedTime = context.getParameter("normalizedTime");
if(normalizedTime != null && normalizedTime.trim().length() > 0) {
    SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSX");
    Date d = sdf2.parse(normalizedTime);
    long normalizedDate = d.getTime();
    Date now = new Date();
    offset = now.getTime() - normalizedDate;
}

And then we simply subtract this offset from the sample's timestamp in the mapping code:
map.put("NormalizedTimestamp", new Date(result.getTimeStamp() - offset));

And now we can run our tests (after compiling, copying the jar, and restarting JMeter). So let's run the test once and then run it again after an interval, preferably with some change in between (for example, I turned off the cache and then turned it back on).
On the Kibana homepage, if we change the interval to cover both runs, we can see that we do have data for two separate runs. (We could note down the RunIds and just filter on them - this would be needed if we had run other tests in between - but we didn't, so we are good.)

Next, go back to Visualize. The Y axis can still be a 90th percentile aggregation of ResponseTime. Then use Split Lines over a Terms aggregation of RunId, a Split Chart over a Terms aggregation of SampleLabel, and finally an X axis Date Histogram of NormalizedTimestamp. Note that the time filter still operates on the timestamp field, so we have to ensure it covers the actual times of both runs.

[1] Source code and Jar available at!953
Note JAR built with Java 1.8

Thursday, January 02, 2014

Modifying JMeter scripts programmatically

Problem: Is there any scripting or GUI way to massively change from WebService(SOAP) Request (DEPRECATED) to SOAP/XML-RPC Request?

Solution: The JMeter script file is an actual XML file, so any solution that manipulates XML can be used to solve the above problem. As Michael Kay's excellent books on XSLT were staring at me, I implemented this in XSLT.
The first thing we need is what is called an identity transform:
<xsl:template match="@*|node()">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()" />
  </xsl:copy>
</xsl:template>
i.e. essentially copy the source XML to the destination. With that done, we can get to the problem at hand - intercepting all the deprecated requests and transforming them into the new form.
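For reference, the identity template sits inside a normal stylesheet skeleton like this (a sketch; the output settings are my assumption):

```
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <!-- Identity transform: copies every node and attribute unchanged. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- Sampler-specific templates override the identity rule here. -->
</xsl:stylesheet>
```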

To do so, we can create a JMeter script that has both of these elements and look at the XML that gets generated, so that we know the source and destination XML we want.
The deprecated XML
 <WebServiceSampler guiclass="WebServiceSamplerGui" testclass="WebServiceSampler" testname="WebService(SOAP) Request (DEPRECATED)" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain">server</stringProp>
          <stringProp name="HTTPSampler.port">port</stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.path">path</stringProp>
          <stringProp name="WebserviceSampler.wsdl_url"></stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <stringProp name="Soap.Action">ws-soapaction</stringProp>
          <stringProp name="HTTPSamper.xml_data">ws-SOAP-DATA</stringProp>
          <stringProp name="WebServiceSampler.xml_data_file">C:\Users\dshetty\Downloads\apache-jmeter-2.9\bin\logkit.xml</stringProp>
          <stringProp name="WebServiceSampler.xml_path_loc"></stringProp>
          <stringProp name="WebserviceSampler.timeout"></stringProp>
          <stringProp name="WebServiceSampler.memory_cache">true</stringProp>
          <stringProp name="WebServiceSampler.read_response">false</stringProp>
          <stringProp name="WebServiceSampler.use_proxy">false</stringProp>
          <stringProp name="WebServiceSampler.proxy_host"></stringProp>
          <stringProp name="WebServiceSampler.proxy_port"></stringProp>
          <stringProp name="TestPlan.comments">comment</stringProp>
        </WebServiceSampler>

The non deprecated SOAP request
        <SoapSampler guiclass="SoapSamplerGui" testclass="SoapSampler" testname="SOAP/XML-RPC Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="SoapSampler.URL_DATA">url</stringProp>
          <stringProp name="HTTPSamper.xml_data">data</stringProp>
          <stringProp name="SoapSampler.xml_data_file">c:/test.xml</stringProp>
          <stringProp name="SoapSampler.SOAP_ACTION">soapaction</stringProp>
          <stringProp name="SoapSampler.SEND_SOAP_ACTION">true</stringProp>
          <boolProp name="HTTPSampler.use_keepalive">false</boolProp>
          <stringProp name="TestPlan.comments">comment</stringProp>
        </SoapSampler>

Knowing this, the template is easy:
  <xsl:template match="WebServiceSampler ">
    <xsl:element name="SoapSampler">
        <xsl:attribute name="guiclass">SoapSamplerGui</xsl:attribute>
        <xsl:attribute name="testclass">SoapSampler</xsl:attribute>
            <xsl:when test="@testname ='WebService(SOAP) Request (DEPRECATED)' ">
                <xsl:attribute name="testname">SOAP/XML-RPC Request</xsl:attribute>
                <xsl:attribute name="testname"><xsl:value-of select="@testname"/></xsl:attribute>
        <xsl:attribute name="enabled"><xsl:value-of select="@enabled"/></xsl:attribute>
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments"/>
        <stringProp name="TestPlan.comments"><xsl:value-of select="stringProp[@name='TestPlan.comments']/text()" /></stringProp>
        <stringProp name="SoapSampler.URL_DATA"><xsl:value-of select="stringProp[@name='HTTPSampler.protocol']/text()" />://<xsl:value-of select="stringProp[@name='HTTPSampler.domain']/text()" />:<xsl:value-of select="stringProp[@name='HTTPSampler.port']/text()" /><xsl:value-of select="stringProp[@name='HTTPSampler.path']/text()" /></stringProp>
        <stringProp name="HTTPSamper.xml_data"><xsl:value-of select="stringProp[@name='HTTPSamper.xml_data']/text()" /></stringProp>
        <stringProp name="SoapSampler.xml_data_file"><xsl:value-of select="stringProp[@name='WebServiceSampler.xml_data_file']/text()" /></stringProp>
        <stringProp name="SoapSampler.SOAP_ACTION"><xsl:value-of select="stringProp[@name='Soap.Action']/text()" /></stringProp>
        <stringProp name="SoapSampler.SEND_SOAP_ACTION"><xsl:value-of select="(stringProp[@name='Soap.Action']/text() != '')" /></stringProp>            
        <boolProp name="HTTPSampler.use_keepalive">false</boolProp>        

Here we just map the older elements into their newer forms (wherever possible). So, for example, we match an element named "WebServiceSampler" and replace it with "SoapSampler". We transform the testname to either use the new default (if the deprecated element used the default) or keep the name the user specified. We copy attributes and nested elements.
You can then use ANT or any other tool to style your input scripts into newer ones, for one or many files:
<target name="transformWS">

An alternative approach would have been to write Java code that parses the XML and removes/replaces objects, but the identity transform mechanism makes the XSLT technique hard to beat. Any object binding mechanism would run into issues when JMeter changes its schema, whereas the XSLT technique only has to worry about the source and destination elements.
I've also always toyed with the idea of creating a functional suite of tests that users could run at will, without being limited to the tests defined in the file, and this is the approach I would use there as well - more to come.

Example -!892

Monday, April 22, 2013

Simulating abandoned flows in JMeter

A user wanted to simulate abandoned flows in JMeter - e.g. a multi-step checkout scenario where some number of users drop off at every step.

This was my suggestion

+ Throughput Controller (90, percent executions, uncheck per user = 10% abandon)
++ Throughput Controller (80, percent executions, uncheck per user = 20% more abandon at this step)
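With "percent executions" the drop-off compounds multiplicatively through the nesting. A quick sketch of the expected sample counts (the class and the iteration count are illustrative):

```java
// Sketch of how nested "percent executions" Throughput Controllers compound:
// 90% of iterations reach step 2, and 80% of those reach step 3.
public class AbandonDemo {
    // Returns how many iterations reach each of the three steps.
    static long[] reach(long iterations, double step2Rate, double step3Rate) {
        long step1 = iterations;                     // every iteration runs step 1
        long step2 = Math.round(step1 * step2Rate);  // outer controller: 90%
        long step3 = Math.round(step2 * step3Rate);  // nested controller: 80% of those
        return new long[] { step1, step2, step3 };
    }

    public static void main(String[] args) {
        long[] counts = reach(1000, 0.90, 0.80);
        // 1000 iterations -> 1000 at step 1, 900 at step 2, 720 at step 3
        System.out.println(counts[0] + " " + counts[1] + " " + counts[2]);
    }
}
```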

Wednesday, March 27, 2013

Sharing session ids across threads in JMeter

Problem: I want to use the same JSESSIONID across the test plan, which includes several Thread Groups.
Which is actually two problems
a. How do I share data between threads in JMeter?
b. How do I connect to an existing session in JMeter given the session id?
Note: a good question to ask at this stage is: do I really, really want to do this? Usually a JMeter thread maps to the actions a user can take (AJAX aside) - doing something like the above moves away from the JMeter model of things, and an alternative to consider is: why don't I change my test so that a user's session is only needed by a single thread? (Mini rant - this is a common scenario in support: there is a problem X, the user thinks that doing A will solve it, A needs B to be done, B needs C, and C needs D, and the user then asks "How do I do D?" without revealing his cards - in this case, why is the test plan for a single user split across multiple thread groups? The user should really be asking how to solve problem X, because someone might say: why not do action Y, which is simple?)

But because we have a blog post to write, we will assume the answer to the above question is: yes, I really want to do this and this is the best way to solve my problem.

How to share data between threads in JMeter
JMeter's only out-of-the-box ways to share data between threads are the properties object and the shared namespace in BeanShell [2], both of which have limitations. The properties object requires you to come up with some scheme to share the data, and also requires you to write your own code for cases where, for example, you consume data much faster than you can generate it.

One technique: if ALL the data to be shared can be generated before the consuming threads need it, then one option is the CSV Data Set Config - the first set of threads writes to a file and the next set of threads reads from it - essentially any shared storage (a file, a database, etc.) will do. However, if you need to read and write this data at the same time, another way is to use the standard Java data structures that are thread safe.

In this post we will look at two ways
a. The CSV data set config
b. Using a Java object
Note that there is a plugin available that does b. - Inter-Thread Communication from jmeter-plugins [1]. If you aren't a programmer then the plugin is probably best for you - though I will say that good testers these days need programming and scripting skills.

However, we will just write our own for fun. You might also need to do this to implement a more complicated scheme.

Before we begin, we need an application that lets us test whatever we write. So I have deployed a web application which has two URLs:
a. set.jsp, which sets an attribute into the session with the name uuid and a randomly generated value, and also returns that value as part of its output. Since this is the first page we access, it will also create a session and set a cookie.

b. get.jsp, which reads the value of the attribute uuid and returns it as its output. If you have the same session and previously accessed set.jsp, then you should see the same value as in step a.
If you don't have a session (or didn't access set.jsp), then you will see the value "null".

So let us first write a simple test to verify that all is well. Here is a screenshot of the test.

We simply access set.jsp and extract out its response. Then we access get.jsp and assert that the value we extracted matches whatever get.jsp returns. If the same session is used for both requests, the assertion will pass; otherwise it will fail.

Remember that Java web applications allow sessions to be maintained in two ways - the session ID is passed either as part of the URL or as a cookie (usually named JSESSIONID), or both.
For this example we will assume we are using cookies.

Let's cause the test to fail. A simple way is to disable the HTTP Cookie Manager: JMeter then won't accept cookies, hence won't be able to maintain sessions, and every request will be a new session.

So we disable the HTTP Cookie Manager and run the test. It fails, as we expected. Let's verify that it's because of the session:
a. Note that the first request gets a Set-Cookie header back from the server - the server is trying to get the client to store the session ID so that the next request can send it back.

b. Note that the next request does not send a Cookie header from JMeter.

c. Note that the server responds to this request with another Set-Cookie (because it tries to create a new session, as it didn't get any existing session cookie).

Now let's enable the HTTP Cookie Manager. It works! Let's verify that the cookie was passed.

Now that we have some degree of confidence that our assertions work (i.e. failures are flagged as failures and successes as successes), let's rerun the test with multiple threads and multiple iterations. All pass. We can also see that different set requests get different cookies. This is because a Cookie Manager is scoped to a single thread (so each thread gets its own cookie and session) and because we have checked "Clear cookies at each iteration" on the Cookie Manager.

Sharing Data using Java Objects in realtime
Let's first create a class that allows data to be shared. For this purpose we use a LinkedBlockingQueue [4]. The javadoc for the BlockingQueue interface says:
"Note that a BlockingQueue can safely be used with multiple producers and multiple consumers."
Which is a fancy way of saying it can be accessed by multiple threads without the programmer needing a degree in computer science. Since we will be reading and writing from different threads in different thread groups, the fact that this data structure natively supports thread safety frees us from having to implement it - no synchronized keywords are needed in our code.
However, we do need to store this queue object somewhere, so we simply create a wrapper class that holds the queue statically and provides getters and setters. Because we don't know the relative rates at which we will read and write, we put a timeout on the get.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class Queue {
    private static LinkedBlockingQueue<Object> q = new LinkedBlockingQueue<Object>();
    public static void put(Object o) throws Exception {
        q.put(o);
    }
    public static Object get(long timeout) throws Exception {
        return q.poll(timeout, TimeUnit.MILLISECONDS);
    }
    public static void clear() throws Exception {
        q.clear();
    }
    public static void main(String[] args) throws Exception {
        // quick single-threaded sanity test
        put("test");
        System.out.println(get(1000));
    }
}
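The queue semantics can also be checked in isolation before wiring the jar into JMeter. QueueDemo below is my own standalone sketch, separate from the wrapper class; the session values are made up:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Standalone sketch of the shared-queue behaviour: one thread produces the
// session data, another consumes it, and a poll() on an empty queue returns
// null once the timeout expires instead of blocking forever.
public class QueueDemo {
    static final LinkedBlockingQueue<Object> q = new LinkedBlockingQueue<Object>();

    // Consume with a timeout; returns null if nothing arrives in time.
    static Object get(long timeoutMillis) {
        try {
            return q.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        Thread producer = new Thread(new Runnable() {
            public void run() {
                q.add(new String[] {"session-id-123", "some-uuid"});
            }
        });
        producer.start();

        String[] data = (String[]) get(60000);   // waits until the producer has run
        System.out.println(data[0]);             // session-id-123
        System.out.println(get(10));             // null - the queue is empty now
    }
}
```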

We run a simple single-threaded test on this class to see that everything is fine (it is), compile it into a jar, place the jar in JMeter's lib directory, and our class is ready to be used.
The test structure is shown in the screenshot below.

a. The setup thread group clears any data that might already be in the queue. Because we hold onto the queue statically and the JMeter GUI is a single JVM, we need to clear out any data left over from a previous run. The BeanShell sampler is just a single call to the clear method of the class we wrote: Queue.clear();

b. We create a threadgroup and use a request to create the sessions, then extract out the data that we need - the session id and the value returned by the page - so that we can assert that we are connecting to the same session. We then call the class we have just created to put these values into the queue (as an array with two values) and finally we introduce a random delay.
The regular expression to extract the cookie is in the screenshot above

And here is the BeanShell code to add the cookie and the extracted value to the Queue

String[] data = new String[2];
data[0] = vars.get("jsessionid");
data[1] = vars.get("uuid");
Queue.put(data);

c. Next we create a threadgroup that is going to use the session ids set by the previous threadgroup, via a BeanShell preprocessor. The preprocessor also has the code to add the session id cookie so that requests will connect to the existing session. As before, we assume that the server manages sessions in the form of cookies, but we could also pass the id as part of the URL once we are able to successfully read it.
We also configure the thread groups to have different numbers of threads so that there is some variation in the rate of requests. Note that if you create more sessions in the "set" threadgroup than you can process in the "get" threadgroup, you will eventually run out of memory unless you make the queue bounded in size. If the "get" threadgroup is faster, its threads will keep waiting (hence we have a timeout when we ask for a session id).

import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;

String[] data = (String[]) Queue.get(60000);
if(data != null) {
    vars.put("jsessionid", data[0]);
    vars.put("uuid", data[1]);
    Cookie cookie = new Cookie("JSESSIONID",vars.get("jsessionid"),"localhost","/", false,-1);
    CookieManager manager = sampler.getCookieManager();
    manager.add(cookie);
}

The code above simply asks the queue for a value and will wait up to 60 seconds for it (the wait only happens if we are consuming the data faster than we can produce it). Then, if we did get a valid value, we use the JMeter APIs [3] to access the Cookie Manager and add a cookie to it. Note that we do need a Cookie Manager in the test plan for this to work. Some values have been hardcoded for simplicity (like the domain name).

We run the test and it's all successful! Remember we do have assertions, so a success is indeed a success. We can also verify, using the View Results Tree listener, that the cookie is being sent in the GET request.

Cons : Anything that needs data to be shared between threads, with waits and synchronization (irrespective of whether you implement it or the JVM does), adds an overhead to the test - so the amount of load you can generate from a single JMeter instance is reduced. Also, if you need a more sophisticated scheme to share data (we used a simple FIFO model), your code becomes more complicated, more prone to errors, and likely to add even more overhead.
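As noted above, if the "set" threadgroup produces faster than the "get" threadgroup consumes, the unbounded queue will eventually exhaust memory. A bounded variant of our wrapper is a small change - pass a capacity to the constructor and use offer() with a timeout, so the producer fails fast instead of blocking forever. A sketch (the capacity of 2 and the timeouts are only for illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedQueue {
    // capacity of 2 chosen only to make the demo in main() easy to follow
    private static LinkedBlockingQueue<Object> q = new LinkedBlockingQueue<Object>(2);

    // returns false if the queue stays full for the whole timeout,
    // instead of blocking the "set" thread group indefinitely
    public static boolean put(Object o, long timeoutMillis) throws Exception {
        return q.offer(o, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static Object get(long timeoutMillis) throws Exception {
        return q.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        if (!put("a", 100)) throw new AssertionError("first put should succeed");
        if (!put("b", 100)) throw new AssertionError("second put should succeed");
        if (put("c", 100)) throw new AssertionError("third put should time out - queue is full");
        if (!"a".equals(get(100))) throw new AssertionError("FIFO order expected");
        System.out.println("ok");
    }
}
```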

Sharing data using files
Using a file is usually not a good candidate for session ids because session ids are short lived and may time out. You also need to have everything available in the file before you can start reading it. Files are better suited to use cases like testing "Remember me" or "Keep me signed in" type cookies. However, the flexibility a CSV Data Set Config provides may make it worth your time.
Here is the structure of the test.

The setUp thread group sends a set of requests to create the session and get the values as before. To write these values to a file we use a Simple Data Writer and configure JMeter (via the sample_variables property) to write only the two variables we are interested in. Note that this is probably not the best way to do it - in addition, JMeter appends data to an existing file, so either you need to delete the file before you run this or use a property that varies dynamically so that a new file is created each run.

Remember that the setUp thread group will always execute before any other threadgroup, so this file will be created and have all the data before the next thread group runs. In the Simple Data Writer we uncheck all the checkboxes, and we edit user.properties to have the following line:

sample_variables=jsessionid,uuid


The next threadgroup simply reads the file and attaches the session id to the cookie manager.

We can have as many threads as we want (we ensure "Clear cookies at each iteration" is checked so that every request uses the cookie set by the BeanShell preprocessor and not one the cookie manager had previously saved). The BeanShell code to add the cookie is the same as before:

import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;

Cookie cookie = new Cookie("JSESSIONID",vars.get("jsessionid"),"localhost","/", false,-1);
CookieManager manager = sampler.getCookieManager();
manager.add(cookie);
We run the test - all successful. The generated file has the values we expect.


The cookies get added to the request

We can change the CSV Data Set Config to keep reading the same file instead of terminating at the end - that works as well. You can also check that if you disable the setUp threadgroup and rerun the test, it still works (because the sessions are still active, assuming they haven't timed out) - so if the data you saved in this file is still valid, it can be used anytime and anywhere. You can also bounce the server on which the JSP web application was deployed. This invalidates all the sessions and hence leaves the file with invalid data. Now if you run the test, all the samples fail - even though we added the session id, the server has no data for these sessions.

Cons : As mentioned above, this isn't realtime sharing - it relies on you already having the data to be used, and on that data being valid.

Files used in this post available at!886

[1] InterThread Communication-
[2] BeanShell Shared namespace -
[3] Cookie Manager -
[4] LinkedBlockingQueue -

Friday, May 25, 2012

JSON in JMeter
Roughly: a user gets a JSON response and wants to use some of that data for the next JSON request. Using a regular expression to extract multiple related pieces of data is sometimes painful. So the alternative I suggest is to have JMeter parse the JSON object and then use it. (The other option would be for JMeter to natively support JSON, but unless JMeter also had a mapping tool, this only saves two lines of code, so I don't think it's that useful.)

So first we download Douglas Crockford's reference implementation of JSON in Java, compile it, generate a jar and copy it into JMeter's lib directory (please ensure you compile it with a Java version that is compatible with whatever you will use to run JMeter) - and don't use this library for evil.

Next launch JMeter.

To spoof this example, we create a BeanShell sampler that will return the JSON string we are interested in. To this sampler we add a BeanShell post processor that will parse the response and form the object that we need for the next request. The code is given below.

import org.json.JSONArray;
import org.json.JSONObject;

String jsonString = prev.getResponseDataAsString();
JSONArray equipmentParts = new JSONArray(jsonString);
JSONArray parts = new JSONArray();

for(int i = 0; i < equipmentParts.length(); i++){
    JSONObject equipmentPart = equipmentParts.getJSONObject(i).getJSONObject("equipmentPart");
    JSONObject allAttributes = equipmentPart.getJSONObject("allAttributes");
    JSONObject part = new JSONObject();
    // copy the attributes you need from allAttributes into part here
    parts.put(part);
}

vars.put("jsonResponse", parts.toString());
Note we first get the response as a string. Then we use the JSON library to parse this string into a JSON array (equipmentParts). We then iterate over every object, extract the attributes we are interested in (you could even have an array of attribute names and a simpler loop) and form the object we want to post (i.e. the parts JSON array and the part JSON objects).

Next we simply convert the JSON parts array into a String to be used by the next sampler by setting it into the jmeter variable jsonResponse.

The next sampler simply uses a POST and specifies only the variable ${jsonResponse} in the value field (so that it is used as-is in the body).
We run the script. The Debug Sampler is used to verify that the value we set up is correct and that it does get posted in the next request.

Sample code is available here!876

Wednesday, April 25, 2012


Paraphrased Question on the JMeter mailing list
While working on JMeter I prepared some Test Plans in JMeter and executed
them with different combinations and I put forward the Results,graphs etc.
so please guide me what kind of details/screens/graphs/files should my
presentation include and what should not?

The first Shakespearean to-be-or-not-to-be dilemma I have when I read the above is: should I try to answer this question? On one hand I'd like to help the person asking, and this is more of a theoretical exercise where I can blab away without fear of consequence or the effort needed to actually do what I'm saying. On the other hand the question is so generic that much of what I say will need so many caveats to be accurate, and there will be a bunch of follow-up questions to answer, that my blabbing away might mislead said person - because I don't really know what the person wants, and as an engineer I hate to speculate without knowing. Perhaps a brusque "what are your requirements?", with an additional dose of lament about the sad state of software testing where people don't know their goals or their requirements, if I'm in a bad mood - tempered with a slightly guilty conscience, since I've been here, done this, and asked such questions. Should I feed the person or teach the person to fish? Since the tester seems to be a beginner, will my long-drawn, caveat-ridden, opinionated, biased answer help the tester or make matters worse?

Any Road.

The single most important thing to know is: when you ran the tests, what did you want to know about the system? What did you expect to prove? What would you consider a successful test? What would you consider a failure? Or, in other words, what were your requirements?
If you have ever read a technical work on testing systems, you know that there are various types of tests, and indeed various types of performance tests. Unless you know this, you don't know what type of tests you need to run, you don't know how to interpret the results, and you probably don't even know which system to look at.

To illustrate with some examples
a. You want to know , for a specified load, whether all your operations succeed (an operation could be anything like placing an order or searching a value).
b. You want to know, for a specified load, whether your system performs acceptably
c. You want to know whether, for a specified load that has been kept on the system for a sufficiently long time, your system shows acceptable characteristics.

You will notice there are a lot of weasel words in the examples above. What is a "specified" load and how do we determine it? What does "performs acceptably" mean and how do we determine it? What are "acceptable characteristics" and how do we determine them?

And the answer is - it depends , hence the weasel words.
Take specified load for example - How many users? How many of them hitting the site concurrently? What think time?
And answer goes back to "why are you running your tests?".
Suppose you have changed some functionality - say you refactored some code. Then usually the specified load is the same as the one you see in production; i.e. you are trying to find out whether your change has an impact. Success is determined by the system performing the same or better with your change.
Suppose you are doing capacity planning - Then the specified load is the number of users you expect to access your site (perhaps sometime in the future as your site grows). Success is determined by the response time and other system characteristics
Say you are going live with a new site. Then specified load is the maximum load you expect the site to see. Success is determined by the response time being lower than those specified and system characteristics being acceptable
Say you want to check if your site can take the added shopping load when Thanksgiving arrives. Then specified load is the peak burst you expect to see and so on.

It isn't easy. Most testers don't even think this is part of their job - I guess it isn't. But asking these questions should be.

Now let's complicate it some more. Let's say that you, like the large majority of testers, are interested in the "response time" for some load. Let's break this up into two parts:
a. The accuracy of the simulation and
b. The limitations of the testing tool.

a) When you run any load test, you are running a simulation (except in a very rare number of cases). The accuracy of the simulation depends on how good your test script is, but it also depends on your test environment. This is its own blog post, but unless you know your simulation is reasonably accurate, or you have safety margins built in, your tests aren't any good. This isn't particularly earth shattering - but how many testers actually validate that their simulation is accurate? Many don't. If you run JMeter with 500 threads on a single Windows machine, with no think time, and you don't get an out of memory error - is your simulation accurate? In most cases the answer is no. The JMeter machine will not be able to generate the same load as 500 Windows machines each running a single user. Having 500 machines is not practical for most people (perhaps this will change with the cloud) - but the key question is: if I can live with some margin of error, how many machines should I have, and how do I determine this? The answer is again - it depends. It depends on the application, on your test, on the machine you are running from, on the network, etc.
One way to validate this is to run the test and note some of the values you are getting from JMeter. Now, while the test is running (this is important!), use some other machine that is as independent from the JMeter machine as possible to access the site under test with a browser (or even a single-user JMeter thread) and see the value you get. If this is close enough to the average JMeter value - great! If it varies wildly, you might have an issue. Another way is to halve the load on the JMeter client machine and add another JMeter client machine running the other half of the load. Do you see substantial changes in response times? Do you see many more concurrent threads on the server? If so, you have a problem; repeat this until you see no significant difference. Or you can monitor the JMeter client machine - is memory/CPU pegged at the highest value? If so, the test probably isn't reliable.
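The "close enough" comparison in the first technique can be made explicit with a trivial check. A sketch (the 20% tolerance, the numbers, and all names here are arbitrary choices, not anything JMeter provides):

```java
public class SimulationCheck {
    // true when the independent single-user measurement is within `tolerance`
    // (a fraction, e.g. 0.2 for 20%) of the load test's average response time
    static boolean plausible(double loadTestAvgMillis, double independentMillis, double tolerance) {
        double diff = Math.abs(loadTestAvgMillis - independentMillis);
        return diff <= tolerance * independentMillis;
    }

    public static void main(String[] args) {
        // 480ms under load vs 450ms standalone: within 20%, simulation looks plausible
        System.out.println(plausible(480.0, 450.0, 0.2));
        // 2400ms under load vs 450ms standalone: the load generator itself may be saturated
        System.out.println(plausible(2400.0, 450.0, 0.2));
    }
}
```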

b) JMeter is not a browser, yadda yadda. Do not expect to be able to tell browser render times without additional tests. Running tests from an intranet is probably not going to give you the same times as when a user from some far-off country accesses your site and gets a page of 500KB. Remember to factor in the network (either guesstimate, or run from machines in different locations if possible).

Remember however that you can never have 100% accuracy (well perhaps you could - it depends :). An important part of engineering is knowing what tradeoffs to make when you have limited time and resources.

Dynamically changing sampler name in a JMeter test

This came up when a user was reading URLs from a file and needed to extract a value from the response that he wanted to use as the sampler name (so that his report would show this information).

The solution is roughly:
a. Extract out the value using a Regular Expression post processor or equivalent
b. Add a BeanShell Post Processor as a child of the sampler with code like

prev.setSampleLabel(vars.get("variablenamefromstepa")); // replace the variable name with the one you extracted in step a

Wednesday, February 08, 2012

The 17th law of defects

"If a functionality is not tested for 3 months by business users , the maximum severity that can be assigned to it is S3"

Source : Deepak's book of software testing laws