Thursday, June 02, 2016

(Non-standard) use cases for a load test tool

There is a saying that goes, "If all you have is a hammer, everything looks like a nail."
Over the years that I have used JMeter, I've used it to implement solutions that would traditionally have been the domain of a programming/scripting language. What follows is a discussion of the types of problems that I have solved with it.

I've excluded the standard use cases for a load test tool - performance tests, capacity planning, soak tests etc.

Exhaustive tests

A long time ago, when I was a trainee software engineer, I read a book on testing. One of the examples asked you to write down the number of test cases you would need to cover all scenarios for the following program: given 3 lengths as input, determine whether they can make up a legal triangle. How many sets of test data do you need in order to be confident that the program always works correctly? (The answer was seven or 17 - I don't recall which - and my guess was 2 lower.) The takeaway was that testing is hard! In the triangle case the theoretically possible input data is infinite, so you have to think in order to come up with good test data. (Hey, agile unit testers, ever think about that instead of unit tests and coverage?)
However, there are cases where you can indeed run all possible test data. In my case, there are "Category" pages - about 3000 of them, each with a bunch of attributes (images, titles, alt text, overviews, descriptions etc.) - some mandatory, some optional. If we approach testing this as "let's come up with all the possible combinations", it's likely going to take some time. Also, as attributes get added or removed, there is a chance something will be missed.
But we could use a different approach. The categories (and their data) are listed in some DB, so it's reasonably simple to extract all of them (in the case of JMeter, to a CSV Data Set) and then have the test script access each and every one of them. The script can then add validations as needed (e.g. did the alt text appear for the image?), as in the sketch below. The number of categories can grow from 3000 to 30,000 or even 300,000 - JMeter would do just fine. As a bonus, I could use this to generate some background load for other tests. Doing it this way takes out some of the guesswork that needs good testers to come up with good variations.
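As an illustration, a validation like the alt text check can be a BeanShell Assertion along these lines (altText and categoryId are hypothetical column names from the CSV Data Set, not from any real script):

String response = SampleResult.getResponseDataAsString();
String expectedAlt = vars.get("altText"); // column from the CSV Data Set
if(expectedAlt != null && response.indexOf("alt=\"" + expectedAlt + "\"") == -1) {
    Failure = true; // mark the sample as failed
    FailureMessage = "alt text missing for category " + vars.get("categoryId");
}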
Note: this still doesn't eliminate the need for good testers (e.g. we still need to remember to run negative tests - does an expired category show the correct error message? Does a non-existent category correctly return the 404 error page? Does passing an apostrophe in the URL mess up the JavaScript, and is the site vulnerable?)

Exhaustive tests, take 2: 404s

There is a bulk load process that uploads safety documents to the site. An index file contains metadata, one item of which is the path to a PDF. The system processes the file, indexes the documents for search and transfers the PDFs to the webserver.
This being a real system, coded by real developers, there are bugs. However, the bug only got discovered 6 months later: not all the documents were searchable and not all PDFs got transferred (about 300K documents).
Now we could re-request all the documents from a different system (but that would take time). What we needed to determine was which documents were missing from search and/or were missing PDFs.
With JMeter, this was a 30-minute activity to script, and it took a couple of hours to run (mainly because it was run against production, so we could only use 10 threads in parallel).
The script goes something like this (a sketch of the decision logic follows the list):
Extract all the bulletin numbers as the data file
Write the script to establish a user session, then search for each bulletin number
If the result is not found, that sample is unsuccessful
If the result is found, do a HEAD request for the PDF
If the PDF is not found, that too is unsuccessful (but logged by a different listener to its own file)
The listeners only log errors to a CSV file; load it into Excel and we are done - we have the bulletin numbers that we pass on to the generating system with a "can you please give us all of these again?"
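As a rough sketch, the "was it found?" decision could live in a BeanShell PostProcessor on the search request (bulletinNum and pdfCheckNeeded are hypothetical names); an If Controller with the condition "${pdfCheckNeeded}" == "true" would then guard the HEAD request:

String response = prev.getResponseDataAsString();
String bulletin = vars.get("bulletinNum"); // column from the CSV data file
if(response.indexOf(bulletin) == -1) {
    prev.setSuccessful(false); // logged by the error listener as missing from search
    vars.put("pdfCheckNeeded", "false");
} else {
    vars.put("pdfCheckNeeded", "true");
}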

Search tuning

One of the most complained-about features on our site is search. We were fiddling with some of the relevancy settings - but how do we know whether the end result is better or worse?
We used the approach below.
Take the top 500 terms that were searched for. These were then distributed to various business units, who specified what they thought the engine should return (we could also have used an independent engine that does better than our internal search, i.e. Google). This is the expected result.
Then script a test which executes a search for these terms in production. We came up with a formula that scored how well actual matched expected (a sketch follows below).
Then make the relevancy tweak (in test, which was synchronized with production), rerun the test and see whether it is better or worse than production.
The tool used for the script / report? JMeter!
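The formula itself isn't reproduced here, but it was along these lines - reward each expected document that shows up near the top of the actual results, weighting earlier positions more heavily, and normalise so runs are comparable. A sketch with a made-up weighting:

import java.util.List;

public class RelevancyScore {
    // expected: what the business units said should come back for a term
    // actual:   what the engine actually returned, in rank order
    public static double score(List<String> expected, List<String> actual) {
        double total = 0;
        for(String doc : expected) {
            int pos = actual.indexOf(doc);
            if(pos >= 0 && pos < 10) {
                total += 1.0 / (pos + 1); // rank 1 scores 1.0, rank 2 scores 0.5 ...
            }
        }
        return expected.isEmpty() ? 0 : total / expected.size(); // higher is better
    }
}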

ERP migration

We skip to a point where we have migrated ERPs and, for some reason, someone(s) decided that we should not migrate all contracted customer prices - instead we would manually inspect orders as they came in and fix the price. After doing the above for three months, someone(s) finally realised this isn't a good idea. Ours not to reason why - ours just to sit and cry (over why our salaries have one digit fewer than the worthies who come up with such schemes).
But some higher-up also wanted to know - well, how many such problems do we have?
The only way to find out was to figure out the contracted price for each customer * product combination (millions of rows) and compare it with the actual price.
This data cannot be extracted directly - there's only a webservice that takes a single customer and many products and calculates the prices.
So I ended up scripting a simple test. The first part hit the old system and saved a data file; the second part hit the new system and flagged discrepancies (a sketch of the flagging step is below). Running time: a day (again mainly due to having to run the test against production).
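The flagging step can be as simple as a BeanShell PostProcessor on the new system's request (oldPrice, newPrice, customerId and productId are hypothetical variable names - the first from the data file written in part one, the second extracted from the current response):

String oldPrice = vars.get("oldPrice"); // from the data file written in part one
String newPrice = vars.get("newPrice"); // extracted from this response
if(oldPrice != null && !oldPrice.equals(newPrice)) {
    prev.setSuccessful(false); // the error listener logs the discrepancy
    prev.setResponseMessage("Price mismatch for customer " + vars.get("customerId")
            + " / product " + vars.get("productId"));
}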

Deadlock detection

We use a framework for our web application. Unfortunately, a recent code change triggered an existing bug: when some user operations happened in parallel (a double click, a page refresh or the back button), threads would deadlock with each other due to the way the MVC framework (3rd party, commercial, yay!) had synchronizations in its code. We thought we had fixed the problem - but how do you verify that?
Easy - script it in JMeter (reproduce the issue against the bad code, then retest with the new code).
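One way to make the operations genuinely overlap is JMeter's Synchronizing Timer, which holds threads until a group of them is ready and releases them together. A minimal plan shape (the sampler being whatever operation deadlocked - not the actual plan I used):

Thread Group (2 threads)
+HTTP Request (the operation that was double-clicked / refreshed)
++Synchronizing Timer (Number of Simulated Users to Group by = 2)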

Cache statistics

We use an old, primitive cache. Using the logs from production we could build a JMeter test which in turn allowed us to measure how well/badly the caches were being used. As a bonus we also detected a code bug - a developer had used an object as a cache key but hadn't overridden equals() - effectively ensuring the cache never got hit.
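For illustration, this is the shape of that bug (the class and field names are made up): without equals()/hashCode(), two keys built from the same values are never equal, so every lookup misses and the cache only ever grows.

import java.util.HashMap;
import java.util.Map;

public class CacheKeyBug {
    static class Key { // missing equals() and hashCode()
        final String customerId;
        Key(String customerId) { this.customerId = customerId; }
    }

    public static void main(String[] args) {
        Map<Key, String> cache = new HashMap<Key, String>();
        cache.put(new Key("42"), "cached value");
        // A logically identical key falls back to Object identity: cache miss.
        System.out.println(cache.get(new Key("42"))); // prints null
    }
}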

Downloading files

We have functionality where a user can download a run file for a plate. This "file" is dynamically generated and isn't stored in the system anywhere. There was a request from our business unit to load all the run files into an instrument. So I scripted a test that spoofed the user actions for each plate, simulated the download and saved the result. Bonus: that's when we discovered that plates with a "/" in their name wouldn't save the file correctly inside the zip.

Website migration

We migrated our website from an old platform to a new one. Many DNS names and many vanity URLs, each with some differing behavior, had to be migrated. The tool to check that everything worked? A JMeter script.



Thursday, September 10, 2015

JMeter committer!

Grabbing a screenshot before the committee realises I don't do anything:
http://people.apache.org/committers-by-project.html#jmeter
But it's time to do something. Probably due to having worked with Visual Basic (and liked it), looking at Swing makes me cry. The one thing I have always wanted to do with respect to JMeter is to add more real world cases, and I think that's what I'm going to work on.

Wednesday, May 27, 2015

Graphs for JMeter using Elasticsearch and Kibana

Disclaimer: I have just done some initial tests with Elasticsearch (thank you, Google) - I have no production experience with it and no idea how to set that up. It's possible the example below has really basic mistakes (in which case, please do point them out!). Note also that the easiest way to integrate this is to configure the JMeter result to be a CSV file and simply import that using Logstash - but then I wouldn't have a blog post to write.


JMeter 2.13 came out with added support for a BackendListener, i.e. a way to send your results to some store where you can analyze the test results and draw pretty pictures for the powers that be. This was always possible using a custom listener, but the person writing it would have to make it perform reasonably well. The new listener runs on its own thread with a buffer, so it is somewhat easier to implement. I also stumbled upon the Logstash + Elasticsearch + Kibana family while solving a log parsing problem, so here is an attempt to integrate that with JMeter.

Prerequisites - This assumes you have downloaded Elasticsearch and Kibana and have managed to get them running and connected to each other.
https://www.elastic.co/downloads/elasticsearch
https://www.elastic.co/downloads/kibana
I used Elasticsearch v1.5.2 and Kibana v4.0.2.

Step 1: Define the mapping for the data that we are going to send to Elasticsearch
This step is, strictly speaking, not necessary - Elasticsearch can dynamically add indexes, types and fields for a type. More on why I needed to do this later.
For now: everything we store in Elasticsearch (referred to as a document) goes into an index and has a type. The type is similar to a class definition - it defines what fields we store, whether they are strings, numbers or dates, and what we expect Elasticsearch to do with them (index, analyze, store etc.).
In our example, we will store our data in indexes whose names are always of the form jmeter-elasticsearch-yyyy-MM-dd (I appended the timestamp because Logstash does it by default - this limits the size of any particular index, which is probably good when you have many tests and distribute Elasticsearch over multiple nodes).
We will define a type called SampleResult that closely matches the JMeter Java object SampleResult. We don't really need all the fields, and if we wanted to optimise the interface we'd probably drop a few. For now we'll take almost everything other than the response data.

Elasticsearch allows you to define an index template, so that all created indexes that match it share the same settings - so we'll use that feature.

Here is the template we will use:
{
  "template" : "jmeter*",
  "mappings" : {
    "SampleResult" : {
      "properties" : {
        "AllThreads":{"type":"long"},
        "Assertions":{"properties":{"Failure":{"type":"boolean"},"FailureMessage":{"type":"string","index":"not_analyzed"},"Name":{"type":"string","index":"not_analyzed"}}},
        "BodySize":{"type":"long"},
        "Bytes":{"type":"long"},
        "ConnectTime":{"type":"long"},
        "ContentType":{"type":"string"},
        "DataType":{"type":"string","index":"not_analyzed"},
        "ElapsedTime":{"type":"long"},
        "EndTime":{"type":"date","format":"dateOptionalTime"},
        "ErrorCount":{"type":"long"},
        "GrpThreads":{"type":"long"},
        "IdleTime":{"type":"long"},
        "Latency":{"type":"long"},
        "ResponseCode":{"type":"string","index":"not_analyzed"},
        "ResponseMessage":{"type":"string","index":"not_analyzed"},
        "ResponseTime":{"type":"long"},
        "SampleCount":{"type":"long"},
        "SampleLabel":{"type":"string","index":"not_analyzed"},
        "StartTime":{"type":"date","format":"dateOptionalTime"},
        "Success":{"type":"string","index":"not_analyzed"},
        "ThreadName":{"type":"string","index":"not_analyzed"},
        "URL":{"type":"string","index":"not_analyzed"},
        "sku":{"type":"string","index":"not_analyzed"},
        "RunId":{"type":"string","index":"not_analyzed"},
        "timestamp":{"type":"date","format":"dateOptionalTime"},
        "NormalizedTimestamp":{"type":"date","format":"dateOptionalTime"}
      }
    }
  }
}

We indicate that our template should apply to any index whose name begins with jmeter.
We also define a mapping called SampleResult, with its properties and their types. We mark some properties as "not_analyzed" - meaning the field should be indexed and searchable, but no analysis is needed on it. If I didn't set this value, then some of the URLs in my example, which use '-' (generally a word separator for search tools like Lucene, which is what Elasticsearch uses), would get tokenized, and that was messing up my graphs. This is the main reason for defining the mapping explicitly (especially as Elasticsearch doesn't allow you to change this value once defined, short of deleting and recreating the index).

We now need to tell Elasticsearch about this template. That's done by making an HTTP PUT request. I use Poster for this purpose: fill in the server and (HTTP) port used by Elasticsearch, paste the template into the body and PUT.

The server should acknowledge it.
You can view that all is successful by typing http://[elasticsearchserver]:[httpport]/_template
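If you'd rather script the upload than use a GUI client, a plain HttpURLConnection PUT does the job too. A minimal sketch - it assumes Elasticsearch is on localhost:9200, the template above is saved as template.json, and it registers the template under the name jmeter:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PutTemplate {
    public static void main(String[] args) throws Exception {
        byte[] body = Files.readAllBytes(Paths.get("template.json"));
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:9200/_template/jmeter").openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(body); // send the template JSON as the request body
        out.close();
        System.out.println("HTTP " + conn.getResponseCode()); // expect 200 OK
    }
}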

Step 2: Implement JMeter's BackendListenerClient
In order to get JMeter to pass our code the results, we need to implement the BackendListenerClient interface. This has the following methods:
setupTest/teardownTest - As the names suggest, these are initialization/cleanup methods. They are called once each time the test is run.
getDefaultParameters - This is useful to hint things to the end user (e.g. the parameter name for the Elasticsearch server and a default value for it like 'localhost').
handleSampleResults - This is the method JMeter will invoke to provide the results of the tests. A List of SampleResults is provided as input; the size of the list will usually be up to the Async Queue Size field that is specified when the listener is configured.
createSampleResult - This allows you to create a copy of a SampleResult, so that if you modify it in some way, it doesn't change anything for the rest of the listeners.

The javadoc for BackendListenerClient helpfully tells you to extend AbstractBackendListenerClient instead of implementing BackendListenerClient directly - so that is what we will do. This class has a few additional methods (and since I haven't been able to figure out what they do, we will simply pretend they don't exist). We can also download the Java source code and read the existing implementation, GraphiteBackendListenerClient (basically copy-paste as much as we can - a Java development best practice).

So with that, here are some snippets - the complete source / project / jar is at [1].
    @Override
    public Arguments getDefaultParameters() {
        Arguments arguments = new Arguments();
        arguments.addArgument("elasticsearchCluster", "localhost:" + DEFAULT_ELASTICSEARCH_PORT);
        arguments.addArgument("indexName", "jmeter-elasticsearch");
        arguments.addArgument("sampleType", "SampleResult");
        arguments.addArgument("dateTimeAppendFormat", "-yyyy-MM-DD");
        arguments.addArgument("normalizedTime","2015-01-01 00:00:00.000-00:00");
        arguments.addArgument("runId", "${__UUID()}");
        return arguments;
   }
We define some parameters that we will use. For example, the elasticsearchCluster parameter defines which Elasticsearch server(s) we need to connect to. The indexName defines the prefix used to create the index (remember that our template gets applied to all indexes that begin with "jmeter"). The dateTimeAppendFormat is a suffix we append to the indexName. We can ignore the other parameters for now.

Next, by referring to the Java API for Elasticsearch, we can write some code to connect to and disconnect from the cluster in the setup and teardown methods.
@Override
public void setupTest(BackendListenerContext context) throws Exception {
    String elasticsearchCluster = context.getParameter("elasticsearchCluster");
    String[] servers = elasticsearchCluster.split(",");

    indexName = context.getParameter("indexName");
    dateTimeAppendFormat = context.getParameter("dateTimeAppendFormat");
    if(dateTimeAppendFormat != null && dateTimeAppendFormat.trim().equals("")) {
        dateTimeAppendFormat = null;
    }
    sampleType = context.getParameter("sampleType");
    client = new TransportClient();
    for(String serverPort : servers) {
        String[] serverAndPort = serverPort.split(":");
        int port = DEFAULT_ELASTICSEARCH_PORT;
        if(serverAndPort.length == 2) {
            port = Integer.parseInt(serverAndPort[1]);
        }
        ((TransportClient)client).addTransportAddress(new InetSocketTransportAddress(serverAndPort[0], port));
    }
    //snip
    super.setupTest(context);
}


@Override
public void teardownTest(BackendListenerContext context) throws Exception {
    client.close();    
    super.teardownTest(context);
}


The setupTest code reads the elasticsearchCluster parameter (this gets set in the JMeter GUI by the user), parses it and sets the appropriate values on the TransportClient (note that the Java API port - default 9300 - is different from the HTTP API port - default 9200). The teardownTest shuts down the client.

And with that we get to the bulk of our code (again, this code might be inefficient).
First let's create a Map from a JMeter SampleResult object. Elasticsearch can treat a Map object as if it were a JSON object.
private Map<String, Object> getMap(SampleResult result) {
    Map<String, Object> map = new HashMap<String, Object>();
    map.put("ResponseTime", result.getTime());
    map.put("ResponseCode", result.getResponseCode());
        //snip lots more code
    return map;
}

Next let's use the API to create a document.
@Override
public void handleSampleResults(List<SampleResult> results,
        BackendListenerContext context) {
    String indexNameToUse = indexName;
    for(SampleResult result : results) {
        Map<String,Object> jsonObject = getMap(result);
        if(dateTimeAppendFormat != null) {
            SimpleDateFormat sdf = new SimpleDateFormat(dateTimeAppendFormat);
            indexNameToUse = indexName + sdf.format(jsonObject.get(TIMESTAMP));
        }
        client.prepareIndex(indexNameToUse, sampleType).setSource(jsonObject).execute().actionGet();                    
    }
}

We dynamically create an index name based on the timestamp of the SampleResult object, then tell Elasticsearch to prepare the index request (the index will be created if it doesn't exist), pass in the Map object (the document we want to store) and execute it - i.e. add this document to the index.
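Since handleSampleResults already hands us a whole buffer of results, one obvious (untested) tweak would be to batch them into a single bulk request instead of paying one network round trip per sample, and to create the SimpleDateFormat once per call rather than once per result. A sketch of a drop-in replacement for the method above, using the same fields:

@Override
public void handleSampleResults(List<SampleResult> results,
        BackendListenerContext context) {
    BulkRequestBuilder bulk = client.prepareBulk();
    SimpleDateFormat sdf = dateTimeAppendFormat == null
            ? null : new SimpleDateFormat(dateTimeAppendFormat);
    for(SampleResult result : results) {
        Map<String, Object> jsonObject = getMap(result);
        String indexNameToUse = sdf == null
                ? indexName : indexName + sdf.format(jsonObject.get(TIMESTAMP));
        bulk.add(client.prepareIndex(indexNameToUse, sampleType).setSource(jsonObject));
    }
    bulk.execute().actionGet(); // one network round trip for the whole buffer
}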

Next we compile our code, create a JAR file and put it in $JMETER_HOME/lib/ext.
(The dependencies are listed in the project's .classpath file.)

We also need to add the jars our listener depends on (the Elasticsearch API and Lucene) into $JMETER_HOME/lib. Copy them from the Elasticsearch installation:
elasticsearch-1.5.2.jar
lucene-core-4.10.4.jar
lucene-analyzers-common-4.10.4.jar

Step 3: Modify the JMeter test to use the listener we created
We should be able to add the BackendListener like any other listener to our JMeter plan (Add -- Listener -- Backend Listener).
In the Backend Listener Implementation dropdown we should see the class we created, and selecting it should fill the Parameters section with the parameters we returned in the getDefaultParameters method. If not, we need to check the logs and verify that we compiled our code with the same Java version we are running JMeter on, that we copied the JAR to the right place, that we copied the dependencies, etc. Next we update the value of the elasticsearchCluster parameter to point to our server, and we can then run the test.


Let's assume we are running the test and the log isn't printing error after error.
The Elasticsearch HTTP API will let you see the data, but we can also use a plugin like elasticsearch-head. Install the plugin, then navigate to http://server:httpport/_plugin/head/ and check whether our index got created and whether it is receiving data. Looks like it is.



Step 4: Looking at the data in Kibana
If Kibana isn't started already, start it.

Navigate to the Kibana server:port in the browser, click Settings and then the Indices tab.
Our indices' names begin with jmeter, so that's what we give in the index pattern field. Ensure that "Index contains time-based events" is checked and, in the dropdown for Time-field name, choose timestamp.


On the next screen, click the star icon to make this index pattern the default. We can also see that the fields match what we defined in our template.


Now click Discover. If you just finished running your test, or it's still running, you should see some data. Kibana's default window is "Last 15 minutes", but you can change that by clicking the top right-hand corner. Note that the index selected is jmeter* - which is what we want. Change it if it's something else by clicking the settings icon on the upper right-hand side.




Let's try to see 90th percentile response times. Click Visualize. It asks what type of chart you want - let's use a Line chart and a new search. When prompted, use jmeter* for the index pattern.

On the left-hand side, for the Y axis, choose "Percentile" as the aggregation, ResponseTime as the field and 90 as the percentile value. For the X axis, choose "Date Histogram" as the aggregation, timestamp as the field and Auto as the interval.
And Apply.


And we see a nice graph of the 90th percentile across all samples - nice, but not very useful.

But that's OK - we can also filter data by typing in queries. In the example below, only the category pages (where the JMeter sample label is "Request Category Page") are used to calculate the percentiles.


Let's drill down further. My example JMeter script iterated over a number of URLs (category pages), and the JMeter requests had a common label, "Request Category Page". Let's see the top 3 worst performing categories. The Y axis remains the same. For the buckets we first choose Split Lines with a "Terms" aggregation on the URL field (as each category has its own URL), taking the top 3 URLs ordered by the 90th percentile; then we add an X axis sub-aggregation of Date Histogram on the timestamp field.


You can even save this visualization and add it to a dashboard.
Once the data is in Elasticsearch, all the features of Elasticsearch and Kibana are available.

Let's look at a couple of more involved examples.

Saving variables
Let's assume that one of the steps in the test script is a search. The script reads a set of terms from a file into a variable (sku) and executes a search transaction. However, the application we are testing has some issues: if a search term is a complex phrase, returns too many results or has too many rules defined for it (synonyms etc.), then it takes time. We want to be able to analyze our results either for search as a whole or for the specific terms that we searched for. One option is to modify the sample label to be something like Search - ${SKU} and use wildcards whenever we want to see search as a whole. But it would be cleaner if we could keep the sample label as Search and somehow save the variable "sku" and its value as part of the document we index.
A normal listener has access to the vars object or can get the variables from the thread. But the BackendListenerClient only receives a SampleResult, with no access to the original thread due to its async nature. One option would be to have a different listener modify the SampleResult to add these variables (in a sub-SampleResult or something). But since in this example I only want a single variable, let's just use a poor man's delimiter in the sample label.
So what we will do is change the sample label to something like
Search~sku=${SKU}
a) We tell Elasticsearch about this field in the mapping (well, actually we already did - if you look at the mapping we specified, we already had a "sku" field defined as a not_analyzed string).
b) We then modify our JMeter script to change the sample label.

c) In our listener, we change the code that transforms the JMeter result into a Map to parse the sample label and set the values accordingly (VAR_DELIMITER is "~" and VALUE_DELIMITER is "=", matching the label format above):
String[] sampleLabels = result.getSampleLabel().split(VAR_DELIMITER);
map.put("SampleLabel", sampleLabels[0]);
for(int i = 1; i < sampleLabels.length; i++) {
    String[] varNameAndValue = sampleLabels[i].split(VALUE_DELIMITER);
    map.put(varNameAndValue[0], varNameAndValue[1]);
}

And that's it (well - compile, copy the listener jar and restart JMeter). Run the JMeter script and we should start seeing data. Here's the 90th percentile graph for search in general - the Y axis is a Percentile aggregation, and the X axis is a Split Line on a Terms aggregation of the SampleLabel field, sub-aggregated by a Date Histogram on timestamp.

And here's the same thing for individual SKUs - the Y axis is a Percentile aggregation, and the X axis is a Split Line on a Terms aggregation of the sku field, sub-aggregated by a Date Histogram on timestamp.


Any variable (or multiple variables) can be handled in the same way, using ~ as the separator and key=value for each variable.


Comparing runs
A common performance issue is you have a set of data and you want to make a change and compare the runs before and after the change. Lets try and compare graphs. I actually couldnt figure out how to do this in Kibana so lets do a quick and dirty fix -
1) We'll settle on an epoch date . Lets say 2015-01-01 00:00:00. No matter what time we started our test actually, we are going to pretend we started it on 2015-01-01 00:00:00. For all requests that are part of the scripts , we are going to shift the time so that the times are all relative to 2015-01-01 00:00:00.
2) Because of 1) , we now don't actually have to compare different timeseries , all the results will appear to be on the same time frame. However we do need a way to differentiate the two runs. So we'll introduce another variable and call it runId. This will just need to be something unique so that we can differentiate the runs

a) We add the new fields to the Elasticsearch mapping (again, already done) - the fields are NormalizedTimestamp and RunId.
b) We modify the Java code to add these parameters and their default values - that was also done in the first code snippet (normalizedTime and runId).
c) Next we modify the listener code to add the new timestamp.
We calculate the offset of the test's start time from the chosen epoch:
String normalizedTime = context.getParameter("normalizedTime");
if(normalizedTime != null && normalizedTime.trim().length() > 0 ){
    SimpleDateFormat sdf2 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSX");
    Date d = sdf2.parse(normalizedTime);
    long normalizedDate = d.getTime();
    Date now = new Date();
    offset = now.getTime() - normalizedDate;
}


Then we simply subtract this offset from the sample's timestamp in the mapping code:
map.put("NormalizedTimestamp", new Date(result.getTimeStamp() - offset));

And now we can run our tests (after compiling, copying the jar and restarting JMeter). So let's run the test once and then run it again (after an interval), preferably after making some change (for example, I turned off the cache and then turned it back on).
On the Kibana homepage, if we change the interval to cover both runs, we can see that we do have data for two separate runs. (We could note down the RunIds and just filter on them - this would be needed if we had run tests in between, but we didn't, so we are good.)


Next, go back to Visualize. This time the Y axis can still be a 90th percentile aggregation of ResponseTime; then use Split Lines over a Terms aggregation of RunId, a Split Chart over a Terms aggregation of SampleLabel, and finally an X axis Date Histogram on NormalizedTimestamp. Note that the time filter still operates on the timestamp field, so we have to ensure that it covers the actual times of both runs.


[1] Source code and Jar available at https://onedrive.live.com/?cid=1BD02FE33F80B8AC&id=1bd02fe33f80b8ac!953
Note: the JAR was built with Java 1.8.

Thursday, January 02, 2014

Modifying JMeter scripts programmatically

Problem: Is there any scripting or GUI way to massively change from WebService(SOAP)
Request (DEPRECATED) to SOAP/XML-RPC Request? http://jmeter.512774.n5.nabble.com/Question-about-WebService-SOAP-Request-DEPRECATED-td5718693.html

Solution: The JMeter script file is an actual XML file, so any solution that manipulates XML can be used to solve the above problem. As Michael Kay's excellent books on XSLT were staring at me, I implemented this in XSLT.
The first thing we need is what is called an identity transform:
<xsl:template match="@*|node()">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()" />
  </xsl:copy>
</xsl:template>

i.e. essentially copy the source XML to the destination. With that done, we can get to the problem at hand - intercepting all the deprecated requests and transforming them into the new element.

To do that, we can create a JMeter script that contains both of these elements and look at the XML that gets generated, so that we know the source and destination XML we want.
The deprecated XML:
 <WebServiceSampler guiclass="WebServiceSamplerGui" testclass="WebServiceSampler" testname="WebService(SOAP) Request (DEPRECATED)" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain">server</stringProp>
          <stringProp name="HTTPSampler.port">port</stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.path">path</stringProp>
          <stringProp name="WebserviceSampler.wsdl_url"></stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <stringProp name="Soap.Action">ws-soapaction</stringProp>
          <stringProp name="HTTPSamper.xml_data">ws-SOAP-DATA</stringProp>
          <stringProp name="WebServiceSampler.xml_data_file">C:\Users\dshetty\Downloads\apache-jmeter-2.9\bin\logkit.xml</stringProp>
          <stringProp name="WebServiceSampler.xml_path_loc"></stringProp>
          <stringProp name="WebserviceSampler.timeout"></stringProp>
          <stringProp name="WebServiceSampler.memory_cache">true</stringProp>
          <stringProp name="WebServiceSampler.read_response">false</stringProp>
          <stringProp name="WebServiceSampler.use_proxy">false</stringProp>
          <stringProp name="WebServiceSampler.proxy_host"></stringProp>
          <stringProp name="WebServiceSampler.proxy_port"></stringProp>
          <stringProp name="TestPlan.comments">comment</stringProp>
        </WebServiceSampler>


The non-deprecated SOAP request:
        <SoapSampler guiclass="SoapSamplerGui" testclass="SoapSampler" testname="SOAP/XML-RPC Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="SoapSampler.URL_DATA">url</stringProp>
          <stringProp name="HTTPSamper.xml_data">data</stringProp>
          <stringProp name="SoapSampler.xml_data_file">c:/test.xml</stringProp>
          <stringProp name="SoapSampler.SOAP_ACTION">soapaction</stringProp>
          <stringProp name="SoapSampler.SEND_SOAP_ACTION">true</stringProp>
          <boolProp name="HTTPSampler.use_keepalive">false</boolProp>
          <stringProp name="TestPlan.comments">comment</stringProp>
        </SoapSampler>

Knowing this, the template is easy:
  <xsl:template match="WebServiceSampler ">
    <xsl:element name="SoapSampler">
        <xsl:attribute name="guiclass">SoapSamplerGui</xsl:attribute>
        <xsl:attribute name="testclass">SoapSampler</xsl:attribute>
        <xsl:choose>
            <xsl:when test="@testname ='WebService(SOAP) Request (DEPRECATED)' ">
                <xsl:attribute name="testname">SOAP/XML-RPC Request</xsl:attribute>
            </xsl:when>
            <xsl:otherwise>
                <xsl:attribute name="testname"><xsl:value-of select="@testname"/></xsl:attribute>
            </xsl:otherwise>
        </xsl:choose>
        <xsl:attribute name="enabled"><xsl:value-of select="@enabled"/></xsl:attribute>
        <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
            <collectionProp name="Arguments.arguments"/>
        </elementProp>
        <stringProp name="TestPlan.comments"><xsl:value-of select="stringProp[@name='TestPlan.comments']/text()" /></stringProp>
        <stringProp name="SoapSampler.URL_DATA"><xsl:value-of select="stringProp[@name='HTTPSampler.protocol']/text()" />://<xsl:value-of select="stringProp[@name='HTTPSampler.domain']/text()" />:<xsl:value-of select="stringProp[@name='HTTPSampler.port']/text()" /><xsl:value-of select="stringProp[@name='HTTPSampler.path']/text()" /></stringProp>
        <stringProp name="HTTPSamper.xml_data"><xsl:value-of select="stringProp[@name='HTTPSamper.xml_data']/text()" /></stringProp>
        <stringProp name="SoapSampler.xml_data_file"><xsl:value-of select="stringProp[@name='WebServiceSampler.xml_data_file']/text()" /></stringProp>
        <stringProp name="SoapSampler.SOAP_ACTION"><xsl:value-of select="stringProp[@name='Soap.Action']/text()" /></stringProp>
        <stringProp name="SoapSampler.SEND_SOAP_ACTION"><xsl:value-of select="(stringProp[@name='Soap.Action']/text() != '')" /></stringProp>            
        <boolProp name="HTTPSampler.use_keepalive">false</boolProp>        
    </xsl:element>
  </xsl:template>

Here we just map the older elements into their newer forms (wherever possible). For example, we match an element named "WebServiceSampler" and replace it with "SoapSampler". We transform the testname to either use the new default (if the deprecated element used the default) or the name the user specified. We copy attributes and nested elements.
You can then use ANT (or any other tool) to apply the stylesheet and transform your input scripts into newer ones, for one or many files:
<target name="transformWS">
    <xslt
        in="CompareXML-RPCRequest.xml"
        out="test.jmx"
        style="transformWS.xsl"/>

</target>
An alternate approach would have been to write Java code to parse the XML and remove/replace objects, but the identity transform mechanism makes the XSLT technique hard to beat. Any object binding mechanism would run into issues when JMeter changes its schema, whereas the XSLT technique only has to worry about the source and destination elements.
I've also always toyed with the idea of creating a functional suite of tests that users could run at will, without being limited to the tests defined in the file, and this is the approach I would use there as well - more to come.

Example - https://skydrive.live.com/?cid=1BD02FE33F80B8AC&id=1BD02FE33F80B8AC!892

Monday, April 22, 2013

Simulating Abandoned flows in JMeter

A user wanted to simulate abandoned flows in JMeter - e.g. a multi-step checkout scenario where some number of users drop off at every step.

This was my suggestion:

+Sampler1
+Throughput Controller (90, percent executions, uncheck "per user" = 10% abandon)
++Sampler2
++Throughput Controller (80, percent executions, uncheck "per user" = 20% more abandon at this step)
+++Sampler3

With "per user" unchecked, the percentages apply across all executions, so Sampler2 runs for 90% of the iterations that reach Sampler1, and Sampler3 for 80% of those - i.e. 72% of users complete all three steps.



Wednesday, March 27, 2013

Sharing session ids across threads in JMeter

Problem: I want to use the same JSESSIONID across a test plan which includes several Thread Groups (http://mail-archives.apache.org/mod_mbox/jmeter-user/201303.mbox/%3Cloom.20130311T124606-446@post.gmane.org%3E).
Which is actually two problems:
a. How do I share data between threads in JMeter?
b. How do I connect to an existing session in JMeter, given the session id?
Note: A good question to ask at this stage is: do I really, really want to do this? Usually a JMeter thread maps to the actions that a single user takes (AJAX aside) - doing something like the above goes away from the JMeter model of things, and an alternative to consider is: why don't I change my test so that a user's session is only needed within one thread? (Mini rant - this is a common scenario in support: there is a problem X, the user thinks that doing A will solve it, A needs B to be done, B needs C and C needs D, and the user then asks "How do I do D?" without revealing his cards - in this case, why is the test plan for one user split across multiple thread groups? The user should really be asking how to solve problem X, because someone might say "why not do action Y, which is simple?".)

But because we have a blog post to write, we will assume the answer to the above question is: yes, I really want to do this and this is the best way to solve my problem.

How to share data between threads in JMeter
JMeter's only out-of-the-box ways to share data between threads are the properties object and the shared namespace in BeanShell [2], both of which have limitations. The properties object needs you to come up with some scheme to share the data, and also needs you to write your own code for cases where, for example, you consume data much faster than you can generate it.

If ALL the data to be shared can be generated before the consuming threads need it, then one option is the CSV Data Set Config: the first set of threads writes to a file and the next set of threads reads from it - though essentially any shared store (a file, a database etc.) will do. However, if you need to read and write this data at the same time, another way is to use the standard Java data structures that are thread safe.

In this post we will look at two ways:
a. The CSV Data Set Config
b. Using a Java object
Note that there is a plugin that does b.: InterThreadCommunication from jmeter-plugins [1]. If you aren't a programmer then the plugin is probably best for you - though I will say that good testers these days need to have programming and scripting skills.

However, we will just write our own for fun. You might also need to do this to implement a more complicated scheme.

Before we begin, we need an application against which to test whatever we write. So I have deployed a web application which has 2 URLs:
a. set.jsp, which sets an attribute into the session with the name uuid and a randomly generated value, and also returns the value as part of its output. Since this is the first page we access, it will also create a session and set a cookie.


b. get.jsp, which reads the value of the attribute uuid and returns it as its output. If you have the same session and previously accessed set.jsp, then you should see the same value as in step a.
If you don't have a session (or didn't access set.jsp), then you will see the value "null".


So let us first write a simple test to verify that all is well. Here is a screenshot of the test.


We simply access set.jsp and extract its response. Then we access get.jsp and assert that the value we extracted matches whatever is returned by get.jsp. If the same session is used for both requests the assertion will pass; otherwise it will fail.

Remember that Java web applications allow sessions to be maintained in two ways: the session ID is passed either as part of the URL or as a cookie (usually named JSESSIONID), or both.
For this example we will assume we are using cookies.

Let's cause the test to fail. A simple way of doing this is to disable the HTTP Cookie Manager. If we disable it, JMeter won't accept cookies and hence won't be able to maintain sessions, and every request will create a new session.

So we disable the HTTP Cookie Manager and run the test. It fails, as expected. Let's verify that it's because of the session:
a. Note that the first request gets a Set-Cookie back from the server - the server is trying to get the client to store the session ID so that the next request can send it back.

b. Note that the next request does not send a cookie header from JMeter.

c. Note that the server responds to this request with another Set-Cookie (because it tries to create a new session, as it didn't get any existing session cookie).


Now let's enable the HTTP Cookie Manager. It works! Let's verify that the cookie was passed.


Now that we have some degree of confidence that our assertions work (i.e. failures are flagged as failures and successes as successes), let's rerun the test with multiple threads and multiple iterations. All samples pass. We can also see that different set requests get different cookies. This is because a cookie manager is scoped to a single thread (so each thread gets its own cookie and session) and because we have checked "Clear cookies at each iteration" on the cookie manager.

Sharing data using Java objects in realtime
Let's first create a class that allows data to be shared. For this purpose we use a LinkedBlockingQueue [4]. The javadoc for the BlockingQueue interface says:
"Note that a BlockingQueue can safely be used with multiple producers and multiple consumers."
Which is a fancy way of saying it can be accessed by multiple threads without the programmer needing a degree in computer science. Since we will be reading and writing from different threads in different thread groups, the fact that this data structure natively supports thread safety frees us from having to implement it - no synchronized keywords are needed in our code.
However, we do need to store this queue object somewhere, so we simply create a wrapper class that holds the queue statically and provides getters and setters. Because we don't know the relative rates at which we will read and write, we put a timeout on the get.
package org.md;

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class Queue {
    // Static, unbounded and thread safe - shared by every thread in the JVM.
    private static LinkedBlockingQueue<Object> q = new LinkedBlockingQueue<Object>();
    public static void put(Object o) throws Exception {
        q.put(o);
    }
    // Waits up to 'timeout' ms for an element; returns null if none arrives.
    public static Object get(long timeout) throws Exception {
        return q.poll(timeout, TimeUnit.MILLISECONDS);
    }
    public static void clear() throws Exception {
        q.clear();
    }
    // Simple single-threaded smoke test: prints one, two, then null.
    public static void main(String[] args) throws Exception {
        Queue.put("one");
        Queue.put("two");
        //q.clear();
        System.out.println(Queue.get(1000));
        System.out.println(Queue.get(1000));
        System.out.println(Queue.get(1000));
    }
}


We run a simple single-threaded test on this class to see that everything is fine (it is), compile it into a jar, place the jar in JMeter's lib directory, and our class is ready to be used.
The test structure is shown in the screenshot below.



a. The setUp thread group clears any data that might already be in the queue. Because we are holding onto the queue statically and the JMeter GUI is a single JVM, we need to clear out any data left over from a previous run. This is the code in the BeanShell sampler:

org.md.Queue.clear();

b. We create a thread group and use a request to create the sessions, extracting the data that we need - the session id and the value returned by the page - so that we can later assert that we are connecting to the same session. We then call the class we have just created to put these values into the queue (as an array with two values), and finally we introduce a random delay.
The regular expression to extract the cookie is in the screenshot above.

And here is the BeanShell code to add the cookie and the extracted value to the queue:

String[] data = new String[2];
data[0] = vars.get("jsessionid");
data[1] = vars.get("uuid");
org.md.Queue.put(data);

c. Next we create a thread group that is going to use the session ids set by the previous thread group, via a BeanShell preprocessor. The preprocessor also has the code to add the session id cookie so that requests will connect to the existing session. As before, we have assumed that the server manages sessions via cookies, but we could also pass the id as part of the URL once we are able to successfully read it.
We also configure the thread groups to have different numbers of threads, so that there is some variation in the rate of requests. Note that if you create more sessions in the "set" thread group than you can process in the "get" thread group, you will eventually run out of memory unless you make the queue bounded in size (a one-line change, shown below). If the "get" thread group is faster, its threads will keep waiting (hence the timeout when we ask for a session id).
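For reference, bounding the queue only needs LinkedBlockingQueue's capacity constructor (the capacity of 10000 here is an arbitrary illustration); put() then blocks whenever the "set" thread group gets too far ahead:

private static LinkedBlockingQueue<Object> q = new LinkedBlockingQueue<Object>(10000);

And here is the BeanShell preprocessor code that attaches the session id: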

import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;
String[] data = (String[])org.md.Queue.get(60000);
if(data != null) {
    vars.put("jsessionid",data[0]);
    vars.put("uuid",data[1]);
    Cookie cookie = new Cookie("JSESSIONID",vars.get("jsessionid"),"localhost","/", false,-1);
    CookieManager manager = sampler.getCookieManager();
    manager.add(cookie);
} 

The code above simply asks the queue for a value, waiting up to 60 seconds for it (this only matters if we are consuming the data much faster than we can produce it). Then, if we got a valid value, we use the JMeter APIs [3] to access the Cookie Manager and add a cookie to it. Note that we do need a Cookie Manager in the plan for this to work. Some values have been hardcoded for simplicity (like the domain name).

We run the test and it's all successful! Remember we do have assertions, so a success is indeed a success. We can also verify using the View Results Tree listener, and we can check that the cookie is being sent in the GET request.


Cons: Anything that shares data between threads, with waits and synchronization (irrespective of whether you do it or the JVM does), adds overhead to the test - so the amount of load you can generate from a single JMeter instance will drop. Also, if you need a more sophisticated scheme to share data (we used a simple FIFO model), then your code becomes more complicated, more prone to errors and likely to add more overhead.

Sharing data using files
Using a file is usually not a good candidate for session ids, because session ids are short lived and may time out. You also need to have everything available in the file before you can start reading it. Files are better suited to use cases like testing "remember me" or "keep me signed in" cookies. However, the flexibility the CSV Data Set Config provides may make it worth your time.
Here is the structure of the test.

The setUp thread group sends a set of requests to create the sessions and get the values, as before. To write these values to a file we use a Simple Data Writer and configure JMeter (by modifying jmeter.properties) to write only the two variables we are interested in. Note that this is probably not the best way to do it - in addition, JMeter appends data to an existing file, so you either need to delete the file before the run or use a dynamically varying property in the filename so that a new file is created each time.

Remember that the setUp thread group always executes before any other thread group, so this file will be created and have all the data before the next thread group runs. In the Simple Data Writer we uncheck all the checkboxes, and we edit jmeter.properties to have the following line:


 sample_variables=jsessionid,uuid

The next thread group simply reads the file and attaches the session id to the cookie manager.

We can have as many threads as we want (we ensure that "Clear cookies at each iteration" is checked, so that every request uses the cookie set by the BeanShell preprocessor and not one the cookie manager previously saved). The BeanShell code to add the cookie is the same as before:

import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;

Cookie cookie = new Cookie("JSESSIONID",vars.get("jsessionid"),"localhost","/", false,-1);
CookieManager manager = sampler.getCookieManager();
manager.add(cookie);
 
We run the test - all successful. The generated file has the values:


3C7C0BD5CA87C16FA65EAECEEBB54CF6,2db51396-6db2-44cb-9336-c732e52f8ac1
5E83FA9D286583DE32E6CC7C4E799599,f173a3ac-9779-408e-8cdb-7d23dfd5e2df
AFB458860FDF0376B9C8B9F52117A668,9f2da7f4-e848-4267-af9d-806e229f9c9f
540A42700742141A8BB8C2941666C8C9,84355a19-d2e1-45b7-ac42-23e9ab03a195
A7791553761A3613064E6BF7564B76E9,c3b14e92-93bc-4500-93be-ecbe43e70a5a
DB79953CEBA4C6E4FE708AE127E53F26,c99a880d-297d-4c00-9ff2-964449d9d106
3A7BBBC042EDB07D03901EEED8D37DC1,b2b29cb8-5261-4345-9405-80973d9f7334
913AFEC1A2D22EF6C64D7B99675DC86B,13bb3aa1-36cc-4e60-942c-d3a29a74b99f
DD06AEC9AF9D7CFF4FFB0AA2462E63C0,51b7b486-e503-43f8-a3ca-e36eb0df88a8

The cookies get added to the request

We can change the CSV Data Set Config to keep reading the same file instead of terminating at the end - that works as well. You can also check that if you disable the setUp thread group and rerun the test, it still works (because the sessions are still active - assuming they haven't timed out) - so as long as the data saved in this file is valid, it can be used anytime and anywhere. You can also bounce the server on which the JSP web application is deployed. This invalidates all the sessions and hence makes the file's data invalid. Now if you run the test, all the samples fail: even though we added the session id, the server has no data for any of these sessions.

Cons: As mentioned above, this isn't realtime sharing - it relies on you already having the data, and on the data being valid.

Files used in this post available at https://skydrive.live.com/#cid=1BD02FE33F80B8AC&id=1BD02FE33F80B8AC!886

References
[1] InterThread Communication- http://code.google.com/p/jmeter-plugins/wiki/InterThreadCommunication
[2] BeanShell Shared namespace - http://jmeter.apache.org/usermanual/best-practices.html#bsh_variables
[3] Cookie Manager - http://jmeter.apache.org/api/org/apache/jmeter/protocol/http/control/CookieManager.html
[4] LinkedBlockingQueue - http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/LinkedBlockingQueue.html



Friday, May 25, 2012

JSON in JMeter

http://jmeter.512774.n5.nabble.com/Require-help-in-Regular-Expression-td5713320.html
Roughly: the user gets a JSON response and wants to use some of that data for the next JSON request. Using a regular expression to extract multiple related pieces of data is sometimes painful, so the alternative I suggest is to have JMeter parse the JSON object and then use it. (The other option would be for JMeter to natively support JSON, but unless JMeter also had a mapping tool, that only saves two lines of code, so I don't think it's that useful.)

So first we download Douglas Crockford's reference implementation of JSON in Java, compile it, generate a jar and copy it into JMeter's lib directory (please ensure you compile it with a Java version that is compatible with whatever you will use to run JMeter) - and don't use this library for evil.

Next launch JMeter.


To simulate this example, we create a BeanShell sampler that returns the JSON string we are interested in. To this sampler we add a BeanShell PostProcessor that parses the response and builds the object we need for the next request. The code is given below.

import org.json.JSONArray;
import org.json.JSONObject;

String jsonString = prev.getResponseDataAsString();
JSONArray equipmentParts = new JSONArray(jsonString);
JSONArray parts = new JSONArray();

for(int i=0;i<equipmentParts.length();i++ ){
    JSONObject equipmentPart = equipmentParts.getJSONObject(i).getJSONObject("equipmentPart");
    JSONObject allAttributes = equipmentPart.getJSONObject("allAttributes");
    JSONObject part = new JSONObject();
    part.put("partId",allAttributes.getLong("equipmentPartId"));
    part.put("partNumber",allAttributes.getString("compositePartName"));
    // add more here
    parts.put(part);
}

vars.put("jsonResponse", parts.toString());
Note we first get the response as a string. Then we use the JSON library to parse this string into a JSON array (equipmentParts). After that we iterate over every object, extract the attributes we are interested in (you could even have an array of attribute names and a simpler loop) and build the object we want to post (the parts JSON array of part JSON objects).

Next we simply convert the parts JSON array into a string to be used by the next sampler, by setting it into the JMeter variable jsonResponse.


The next sampler simply does a POST and specifies only the variable ${jsonResponse} in the value field (so that it will be used as-is in the body).
We run the script. The Debug Sampler is used to verify that the value we set up is correct and that it does get posted in the next request.



Sample code is available here https://skydrive.live.com/#cid=1BD02FE33F80B8AC&id=1BD02FE33F80B8AC!876