Wednesday, June 10, 2009
Fixing URLs
Motivation
For some tests, the URL to visit is extracted from the previous response using one of the PostProcessors (usually the Regular Expression Extractor). However, a URL containing parameters will normally have its ampersands (&) HTML-escaped as &amp;. This causes problems because JMeter will not automatically unescape these URLs.
Solution
Use a JavaScript function to unescape the URLs.
Sample
Assume that the Regular Expression Extractor has stored the URL in a variable named returnUrl. Then, in the Path field of the next HTTP Sampler, instead of using ${returnUrl}, use the function below.
${__javaScript('${returnUrl}'.replace(/amp;/gi\,''))}
This will simply replace amp; with an empty string, so that a URL of the form http://www.yoursite.com/page?param1=value1&amp;param2=value2 becomes
http://www.yoursite.com/page?param1=value1&param2=value2
Note that we don't need to do this for percent-encoded values such as %2F, because those are taken care of by the web server.
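As a stand-alone illustration of what the expression above evaluates, here is the same replacement in plain Java. The URL is the hypothetical one from the example; inside JMeter the work is done by the __javaScript function instead.

```java
public class UnescapeUrl {
    public static void main(String[] args) {
        // The value a Regex PostProcessor might have extracted, with the
        // HTML-escaped ampersand still in place.
        String escaped = "http://www.yoursite.com/page?param1=value1&amp;param2=value2";
        // (?i) makes the match case-insensitive, mirroring the /gi flags
        // used in the JavaScript version of the replace.
        String unescaped = escaped.replaceAll("(?i)amp;", "");
        System.out.println(unescaped);
        // prints http://www.yoursite.com/page?param1=value1&param2=value2
    }
}
```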
Tuesday, June 09, 2009
Data Driven Testing (from a database)
Motivation
The data for a test needs to be varied, but the input needs to be constrained to a set of values (normally kept in some database table). E.g. check the prices of the top 10 items: here we need to provide 10 item IDs, but if these values are kept in a CSV file, then the file needs to be repeatedly updated. There needs to be a way to get these values to the test at runtime.
Solution
Use the JDBC Request sampler to fetch the data at runtime. Use either the Save Responses to a file listener or a BeanShell PostProcessor to write the data to a file. Finally, use the CSV Data Set Config to read this data.
Sample
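The original post did not attach its sample, but the BeanShell step can be sketched as follows. In a real BeanShell PostProcessor the JMeter variables object is available as vars; here it is simulated with a plain Map so the sketch is self-contained. The variable name itemId and the row values are assumptions, and a real PostProcessor would write to a file rather than a string.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

public class JdbcToCsvSketch {

    // If the JDBC Request sampler's "Variable names" field is set to itemId,
    // JMeter stores the row count in itemId_# and the values in itemId_1..N.
    static String buildCsv(Map<String, String> vars) {
        int rows = Integer.parseInt(vars.get("itemId_#"));
        StringWriter buffer = new StringWriter();
        PrintWriter out = new PrintWriter(buffer);
        for (int i = 1; i <= rows; i++) {
            // One value per line, ready to be read by a CSV Data Set Config.
            out.println(vars.get("itemId_" + i));
        }
        out.flush();
        return buffer.toString();
    }

    public static void main(String[] args) {
        Map<String, String> vars = new HashMap<>();
        vars.put("itemId_#", "3");
        vars.put("itemId_1", "101");
        vars.put("itemId_2", "102");
        vars.put("itemId_3", "103");
        System.out.print(buildCsv(vars));
    }
}
```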
Sunday, June 07, 2009
Varying the data to the test
Motivation
The same tests need to be run, but the data we pass to them must be varied.
In addition, further tests may need to change their behavior depending on the data. E.g. a registering user may provide different data, and subsequent screens may change depending on what the user has entered, or may be skipped altogether. A common scenario is a user registering on a site that has various options to choose from, where we need to test the behavior of the site for different combinations.
Solution
JMeter provides multiple ways to vary the data
a. Use of User Parameters [1]
b. Use of Variables [2]
c. CSV Data Set Config [3]
The solution we will use is Option c.
The advantage of using the CSV Data Set Config is that the data is externalised from the test, and can be updated by any user, including a non-technical business person. By making the assertion a part of the data, users can add more tests without the test itself needing to be modified. The other advantage of a CSV Data Set Config over a User Parameters PreProcessor is that the number of items tested is fixed, independent of the number of threads you will run (assuming you write the test in that fashion), OR can be made dependent on the number of threads. User Parameters is more closely tied to the number of threads.
E.g. if you wanted to create 10 distinct users, you'd only have 10 rows in your CSV Data Set Config and you could use 1 to 10 threads. But if you needed to do this with User Parameters you'd probably need to specify exactly 10 threads.
So the solution takes the form of
a. Create a CSV Data Set Config element and point it to the CSV files.
b. Create your tests to use this data
If you want as many tests as you have rows in your CSV file, then you can either end the thread on EOF or use a Loop Controller and check for the special value '<EOF>'.
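As an illustration (the file name and column names here are invented for the example), the CSV file might look like:

```
username,password,expected
alice,secret1,Welcome alice
bob,secret2,Welcome bob
```

With "Recycle on EOF" and "Stop thread on EOF" both set to False in the CSV Data Set Config, one common pattern (using a While Controller rather than the Loop Controller mentioned above) is to loop until the file is exhausted by checking for the special value in the controller's condition:

```
${__javaScript("${username}" != "<EOF>")}
```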
Sample
References
[1] User Parameters
[2] Variables
[3] CSV Data Set Config
Friday, June 05, 2009
Testing multiple environments with JMeter
Motivation
Most projects have a number of environments through which the code moves, e.g. a Development environment, a Test environment, a User Acceptance Test environment, a Reference environment and finally the Production environment. Hence the same test script has to be targeted at multiple environments. In theory all of this is automated, and anything that succeeds/fails in one environment would succeed/fail consistently in all other environments. However, environmental differences and human errors almost always cause one to hear "But it works on my machine".
What's needed is a way to run the same test against different environments easily.
Solution
An assumption we are going to make here is that these tests are automated and would be run from the command line, in our case using ANT.
The solution has to address two basic requirements
a. Parameterization of the various environments
b. You should still be able to run the test in GUI mode (when you are modifying/extending the test)
To implement this we will use JMeter properties[1] and use normal ANT [2],[3] features.
While writing the HTTP tests, add an HTTP Request Default element
If the Server name or IP is to be varied then enter
${__property(run.server,,yourdevserver.com)}
This will look for a property named run.server, but will use yourdevserver.com if no property is specified. For the third parameter, use the server against which you want to run the test while in GUI mode.
Finally, in the build script that you use to run JMeter:
<jmeter jmeterhome=".." testplan="${run.test.plan}"
resultlog="${report.dir}/${run.test.report}-${run.env}-${DSTAMP}${TSTAMP}.jtl">
<property name="jmeter.save.saveservice.output_format" value="xml"/>
<property name="run.server" value="${run.server}" />
</jmeter>
where the ANT property run.server can be varied to run the test against different environments.
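When running JMeter directly rather than through ANT, the same property can be passed on the command line with the -J flag (the plan and server names here are just examples):

```
jmeter -n -t yourtestplan.jmx -Jrun.server=yourtestserver.com
```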
Sample
TODO attach ant and jmx file.
[1] JMeter Properties
[2] JMeter Ant task
[3] Ant Manual
JMeter Prologue
I've been testing my existing work application using JMeter for the last 6 months, so what follows is a series of posts covering things that I think aren't covered in most documentation, or aren't easily available online in one place (I had to figure these solutions out myself). However, there is a caveat: a solution may not be the best possible one, may not be efficient, and there may be more elegant ways to do what it does. In which case, please inform me. The solutions do have one thing in common, though: they worked for me. Your mileage may vary.
The rest of the post is a rant , feel free to move on
A few observations on a Friday (with a tip of the hat to BusyBee)
That people who keep asking for Unit Tests don't have a clue about testing web based applications.
That data driven integration testing between multiple systems is damn hard to do , but boy is it satisfying when you actually find a defect due to it, and boy is it worth your time to develop these tests.
That open source tools are so much better than some of the paid commercial tools. Except when it comes to presentation of the reports. Which surprisingly is also the difference between a Techie and a Manager.
That there still is no tool to truly perform visual tests on a website.
And That someone who could develop such a tool could make a lot of money.
And that someone wont be me.
That there is no time to test all the normal cases, much less the weird corner ones. And that there would always have been plenty of time, in hindsight, if we had just gotten our act together earlier.
That a successful test is one that fails in the local, development, test, QA and UAT environments. That a test that succeeds in the above environments but fails in the production environment is a pain.
That a web based test should never underestimate the ignorance of a user.
That developers do not make good testers. And that most developers are better testers than the official testing team, especially when we test other developers' code. And that scares me.